00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 989
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3651
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.031 The recommended git tool is: git
00:00:00.031 using credential 00000000-0000-0000-0000-000000000002
00:00:00.039 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.055 Fetching changes from the remote Git repository
00:00:00.058 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.078 Using shallow fetch with depth 1
00:00:00.078 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.078 > git --version # timeout=10
00:00:00.098 > git --version # 'git version 2.39.2'
00:00:00.098 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.123 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.123 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.448 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.458 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.468 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.468 > git config core.sparsecheckout # timeout=10
00:00:03.478 > git read-tree -mu HEAD # timeout=10
00:00:03.493 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.512 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.512 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.642 [Pipeline] Start of Pipeline
00:00:03.657 [Pipeline] library
00:00:03.658 Loading library shm_lib@master
00:00:03.659 Library shm_lib@master is cached. Copying from home.
00:00:03.677 [Pipeline] node
00:00:03.690 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.692 [Pipeline] {
00:00:03.702 [Pipeline] catchError
00:00:03.704 [Pipeline] {
00:00:03.717 [Pipeline] wrap
00:00:03.726 [Pipeline] {
00:00:03.736 [Pipeline] stage
00:00:03.738 [Pipeline] { (Prologue)
00:00:03.756 [Pipeline] echo
00:00:03.758 Node: VM-host-WFP7
00:00:03.764 [Pipeline] cleanWs
00:00:03.776 [WS-CLEANUP] Deleting project workspace...
00:00:03.776 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.784 [WS-CLEANUP] done
00:00:03.981 [Pipeline] setCustomBuildProperty
00:00:04.062 [Pipeline] httpRequest
00:00:04.371 [Pipeline] echo
00:00:04.372 Sorcerer 10.211.164.20 is alive
00:00:04.381 [Pipeline] retry
00:00:04.383 [Pipeline] {
00:00:04.395 [Pipeline] httpRequest
00:00:04.399 HttpMethod: GET
00:00:04.400 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.401 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.402 Response Code: HTTP/1.1 200 OK
00:00:04.402 Success: Status code 200 is in the accepted range: 200,404
00:00:04.402 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.548 [Pipeline] }
00:00:04.565 [Pipeline] // retry
00:00:04.572 [Pipeline] sh
00:00:04.857 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.872 [Pipeline] httpRequest
00:00:05.493 [Pipeline] echo
00:00:05.495 Sorcerer 10.211.164.20 is alive
00:00:05.502 [Pipeline] retry
00:00:05.503 [Pipeline] {
00:00:05.514 [Pipeline] httpRequest
00:00:05.518 HttpMethod: GET
00:00:05.518 URL: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:05.519 Sending request to url: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:05.520 Response Code: HTTP/1.1 200 OK
00:00:05.521 Success: Status code 200 is in the accepted range: 200,404
00:00:05.521 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:27.990 [Pipeline] }
00:00:28.008 [Pipeline] // retry
00:00:28.016 [Pipeline] sh
00:00:28.302 + tar --no-same-owner -xf spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:30.858 [Pipeline] sh
00:00:31.148 + git -C spdk log --oneline -n5
00:00:31.148 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:00:31.148 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit()
00:00:31.148 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size
00:00:31.148 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy()
00:00:31.148 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT
00:00:31.171 [Pipeline] withCredentials
00:00:31.182 > git --version # timeout=10
00:00:31.196 > git --version # 'git version 2.39.2'
00:00:31.215 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:31.218 [Pipeline] {
00:00:31.226 [Pipeline] retry
00:00:31.228 [Pipeline] {
00:00:31.244 [Pipeline] sh
00:00:31.533 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:31.807 [Pipeline] }
00:00:31.826 [Pipeline] // retry
00:00:31.833 [Pipeline] }
00:00:31.853 [Pipeline] // withCredentials
00:00:31.864 [Pipeline] httpRequest
00:00:32.235 [Pipeline] echo
00:00:32.237 Sorcerer 10.211.164.20 is alive
00:00:32.248 [Pipeline] retry
00:00:32.250 [Pipeline] {
00:00:32.265 [Pipeline] httpRequest
00:00:32.270 HttpMethod: GET
00:00:32.271 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:32.271 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:32.282 Response Code: HTTP/1.1 200 OK
00:00:32.282 Success: Status code 200 is in the accepted range: 200,404
00:00:32.283 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:02.113 [Pipeline] }
00:01:02.130 [Pipeline] // retry
00:01:02.138 [Pipeline] sh
00:01:02.460 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:03.854 [Pipeline] sh
00:01:04.139 + git -C dpdk log --oneline -n5
00:01:04.139 eeb0605f11 version: 23.11.0
00:01:04.139 238778122a doc: update release notes for 23.11
00:01:04.139 46aa6b3cfc doc: fix description of RSS features
00:01:04.140 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:04.140 7e421ae345 devtools: support skipping forbid rule check
00:01:04.160 [Pipeline] writeFile
00:01:04.176 [Pipeline] sh
00:01:04.463 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:04.476 [Pipeline] sh
00:01:04.760 + cat autorun-spdk.conf
00:01:04.760 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.760 SPDK_RUN_ASAN=1
00:01:04.760 SPDK_RUN_UBSAN=1
00:01:04.760 SPDK_TEST_RAID=1
00:01:04.760 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:04.760 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:04.760 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:04.768 RUN_NIGHTLY=1
00:01:04.770 [Pipeline] }
00:01:04.784 [Pipeline] // stage
00:01:04.799 [Pipeline] stage
00:01:04.802 [Pipeline] { (Run VM)
00:01:04.815 [Pipeline] sh
00:01:05.100 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:05.100 + echo 'Start stage prepare_nvme.sh'
00:01:05.100 Start stage prepare_nvme.sh
00:01:05.100 + [[ -n 2 ]]
00:01:05.100 + disk_prefix=ex2
00:01:05.100 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:05.100 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:05.100 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:05.100 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.100 ++ SPDK_RUN_ASAN=1
00:01:05.100 ++ SPDK_RUN_UBSAN=1
00:01:05.100 ++ SPDK_TEST_RAID=1
00:01:05.100 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:05.100 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:05.100 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:05.100 ++ RUN_NIGHTLY=1
00:01:05.100 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:05.100 + nvme_files=()
00:01:05.100 + declare -A nvme_files
00:01:05.100 + backend_dir=/var/lib/libvirt/images/backends
00:01:05.100 + nvme_files['nvme.img']=5G
00:01:05.100 + nvme_files['nvme-cmb.img']=5G
00:01:05.100 + nvme_files['nvme-multi0.img']=4G
00:01:05.100 + nvme_files['nvme-multi1.img']=4G
00:01:05.100 + nvme_files['nvme-multi2.img']=4G
00:01:05.100 + nvme_files['nvme-openstack.img']=8G
00:01:05.100 + nvme_files['nvme-zns.img']=5G
00:01:05.100 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:05.100 + (( SPDK_TEST_FTL == 1 ))
00:01:05.100 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:05.100 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:05.100 + for nvme in "${!nvme_files[@]}"
00:01:05.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:05.100 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:05.100 + for nvme in "${!nvme_files[@]}"
00:01:05.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:05.100 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:05.100 + for nvme in "${!nvme_files[@]}"
00:01:05.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:05.100 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:05.100 + for nvme in "${!nvme_files[@]}"
00:01:05.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:05.100 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:05.100 + for nvme in "${!nvme_files[@]}"
00:01:05.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:05.100 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:05.100 + for nvme in "${!nvme_files[@]}"
00:01:05.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:05.100 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:05.100 + for nvme in "${!nvme_files[@]}"
00:01:05.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:05.361 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:05.361 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:05.361 + echo 'End stage prepare_nvme.sh'
00:01:05.361 End stage prepare_nvme.sh
00:01:05.374 [Pipeline] sh
00:01:05.659 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:05.659 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:01:05.659
00:01:05.659 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:05.659 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:05.659 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:05.659 HELP=0
00:01:05.659 DRY_RUN=0
00:01:05.659 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:01:05.659 NVME_DISKS_TYPE=nvme,nvme,
00:01:05.659 NVME_AUTO_CREATE=0
00:01:05.659 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:01:05.659 NVME_CMB=,,
00:01:05.659 NVME_PMR=,,
00:01:05.659 NVME_ZNS=,,
00:01:05.659 NVME_MS=,,
00:01:05.659 NVME_FDP=,,
00:01:05.659 SPDK_VAGRANT_DISTRO=fedora39
00:01:05.659 SPDK_VAGRANT_VMCPU=10
00:01:05.659 SPDK_VAGRANT_VMRAM=12288
00:01:05.659 SPDK_VAGRANT_PROVIDER=libvirt
00:01:05.659 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:05.659 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:05.659 SPDK_OPENSTACK_NETWORK=0
00:01:05.659 VAGRANT_PACKAGE_BOX=0
00:01:05.659 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:05.659 FORCE_DISTRO=true
00:01:05.659 VAGRANT_BOX_VERSION=
00:01:05.659 EXTRA_VAGRANTFILES=
00:01:05.659 NIC_MODEL=virtio
00:01:05.659
00:01:05.659 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:05.659 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:07.567 Bringing machine 'default' up with 'libvirt' provider...
00:01:08.138 ==> default: Creating image (snapshot of base box volume).
00:01:08.138 ==> default: Creating domain with the following settings...
00:01:08.138 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732164444_4b8e855fe18cd2df609c
00:01:08.138 ==> default: -- Domain type: kvm
00:01:08.138 ==> default: -- Cpus: 10
00:01:08.138 ==> default: -- Feature: acpi
00:01:08.138 ==> default: -- Feature: apic
00:01:08.138 ==> default: -- Feature: pae
00:01:08.138 ==> default: -- Memory: 12288M
00:01:08.138 ==> default: -- Memory Backing: hugepages:
00:01:08.138 ==> default: -- Management MAC:
00:01:08.138 ==> default: -- Loader:
00:01:08.138 ==> default: -- Nvram:
00:01:08.138 ==> default: -- Base box: spdk/fedora39
00:01:08.138 ==> default: -- Storage pool: default
00:01:08.138 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732164444_4b8e855fe18cd2df609c.img (20G)
00:01:08.138 ==> default: -- Volume Cache: default
00:01:08.138 ==> default: -- Kernel:
00:01:08.138 ==> default: -- Initrd:
00:01:08.138 ==> default: -- Graphics Type: vnc
00:01:08.138 ==> default: -- Graphics Port: -1
00:01:08.138 ==> default: -- Graphics IP: 127.0.0.1
00:01:08.138 ==> default: -- Graphics Password: Not defined
00:01:08.138 ==> default: -- Video Type: cirrus
00:01:08.138 ==> default: -- Video VRAM: 9216
00:01:08.138 ==> default: -- Sound Type:
00:01:08.138 ==> default: -- Keymap: en-us
00:01:08.138 ==> default: -- TPM Path:
00:01:08.138 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:08.138 ==> default: -- Command line args:
00:01:08.138 ==> default: -> value=-device,
00:01:08.138 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:08.138 ==> default: -> value=-drive,
00:01:08.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:01:08.138 ==> default: -> value=-device,
00:01:08.139 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:08.139 ==> default: -> value=-device,
00:01:08.139 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:08.139 ==> default: -> value=-drive,
00:01:08.139 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:08.139 ==> default: -> value=-device,
00:01:08.139 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:08.139 ==> default: -> value=-drive,
00:01:08.139 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:08.139 ==> default: -> value=-device,
00:01:08.139 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:08.139 ==> default: -> value=-drive,
00:01:08.139 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:08.139 ==> default: -> value=-device,
00:01:08.139 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:08.400 ==> default: Creating shared folders metadata...
00:01:08.400 ==> default: Starting domain.
00:01:09.783 ==> default: Waiting for domain to get an IP address...
00:01:27.885 ==> default: Waiting for SSH to become available...
00:01:27.885 ==> default: Configuring and enabling network interfaces...
00:01:33.173 default: SSH address: 192.168.121.244:22
00:01:33.173 default: SSH username: vagrant
00:01:33.173 default: SSH auth method: private key
00:01:36.475 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:44.609 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:49.885 ==> default: Mounting SSHFS shared folder...
00:01:52.423 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:52.423 ==> default: Checking Mount..
00:01:54.329 ==> default: Folder Successfully Mounted!
00:01:54.329 ==> default: Running provisioner: file...
00:01:55.269 default: ~/.gitconfig => .gitconfig
00:01:55.840
00:01:55.840 SUCCESS!
00:01:55.840
00:01:55.840 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:55.840 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:55.840 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:55.840
00:01:55.850 [Pipeline] }
00:01:55.867 [Pipeline] // stage
00:01:55.878 [Pipeline] dir
00:01:55.878 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:55.880 [Pipeline] {
00:01:55.896 [Pipeline] catchError
00:01:55.898 [Pipeline] {
00:01:55.910 [Pipeline] sh
00:01:56.196 + vagrant ssh-config --host vagrant
00:01:56.196 + sed -ne /^Host/,$p
00:01:56.196 + tee ssh_conf
00:01:58.738 Host vagrant
00:01:58.738 HostName 192.168.121.244
00:01:58.738 User vagrant
00:01:58.738 Port 22
00:01:58.738 UserKnownHostsFile /dev/null
00:01:58.738 StrictHostKeyChecking no
00:01:58.738 PasswordAuthentication no
00:01:58.738 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:58.738 IdentitiesOnly yes
00:01:58.738 LogLevel FATAL
00:01:58.738 ForwardAgent yes
00:01:58.738 ForwardX11 yes
00:01:58.738
00:01:58.753 [Pipeline] withEnv
00:01:58.756 [Pipeline] {
00:01:58.771 [Pipeline] sh
00:01:59.056 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:59.056 source /etc/os-release
00:01:59.056 [[ -e /image.version ]] && img=$(< /image.version)
00:01:59.056 # Minimal, systemd-like check.
00:01:59.056 if [[ -e /.dockerenv ]]; then
00:01:59.056 # Clear garbage from the node's name:
00:01:59.056 # agt-er_autotest_547-896 -> autotest_547-896
00:01:59.056 # $HOSTNAME is the actual container id
00:01:59.056 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:59.056 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:59.056 # We can assume this is a mount from a host where container is running,
00:01:59.056 # so fetch its hostname to easily identify the target swarm worker.
00:01:59.056 container="$(< /etc/hostname) ($agent)"
00:01:59.056 else
00:01:59.056 # Fallback
00:01:59.056 container=$agent
00:01:59.056 fi
00:01:59.056 fi
00:01:59.056 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:59.056
00:01:59.326 [Pipeline] }
00:01:59.344 [Pipeline] // withEnv
00:01:59.353 [Pipeline] setCustomBuildProperty
00:01:59.370 [Pipeline] stage
00:01:59.372 [Pipeline] { (Tests)
00:01:59.393 [Pipeline] sh
00:01:59.675 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:59.950 [Pipeline] sh
00:02:00.263 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:00.565 [Pipeline] timeout
00:02:00.565 Timeout set to expire in 1 hr 30 min
00:02:00.568 [Pipeline] {
00:02:00.585 [Pipeline] sh
00:02:00.869 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:01.440 HEAD is now at 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:02:01.453 [Pipeline] sh
00:02:01.733 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:02.006 [Pipeline] sh
00:02:02.286 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:02.559 [Pipeline] sh
00:02:02.837 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:03.095 ++ readlink -f spdk_repo
00:02:03.095 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:03.095 + [[ -n /home/vagrant/spdk_repo ]]
00:02:03.095 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:03.095 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:03.095 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:03.095 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:03.095 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:03.095 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:03.095 + cd /home/vagrant/spdk_repo
00:02:03.095 + source /etc/os-release
00:02:03.095 ++ NAME='Fedora Linux'
00:02:03.095 ++ VERSION='39 (Cloud Edition)'
00:02:03.095 ++ ID=fedora
00:02:03.095 ++ VERSION_ID=39
00:02:03.095 ++ VERSION_CODENAME=
00:02:03.095 ++ PLATFORM_ID=platform:f39
00:02:03.095 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:03.095 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:03.095 ++ LOGO=fedora-logo-icon
00:02:03.095 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:03.095 ++ HOME_URL=https://fedoraproject.org/
00:02:03.095 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:03.095 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:03.095 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:03.095 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:03.095 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:03.095 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:03.095 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:03.095 ++ SUPPORT_END=2024-11-12
00:02:03.095 ++ VARIANT='Cloud Edition'
00:02:03.095 ++ VARIANT_ID=cloud
00:02:03.095 + uname -a
00:02:03.095 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:03.095 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:03.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:03.662 Hugepages
00:02:03.662 node hugesize free / total
00:02:03.662 node0 1048576kB 0 / 0
00:02:03.662 node0 2048kB 0 / 0
00:02:03.662
00:02:03.662 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:03.662 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:03.662 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:03.662 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:03.662 + rm -f /tmp/spdk-ld-path
00:02:03.662 + source autorun-spdk.conf
00:02:03.662 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:03.662 ++ SPDK_RUN_ASAN=1
00:02:03.662 ++ SPDK_RUN_UBSAN=1
00:02:03.662 ++ SPDK_TEST_RAID=1
00:02:03.662 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:03.662 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:03.662 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:03.662 ++ RUN_NIGHTLY=1
00:02:03.662 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:03.662 + [[ -n '' ]]
00:02:03.662 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:03.662 + for M in /var/spdk/build-*-manifest.txt
00:02:03.662 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:03.662 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:03.662 + for M in /var/spdk/build-*-manifest.txt
00:02:03.662 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:03.662 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:03.662 + for M in /var/spdk/build-*-manifest.txt
00:02:03.662 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:03.662 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:03.662 ++ uname
00:02:03.662 + [[ Linux == \L\i\n\u\x ]]
00:02:03.662 + sudo dmesg -T
00:02:03.662 + sudo dmesg --clear
00:02:03.662 + dmesg_pid=6158
00:02:03.662 + [[ Fedora Linux == FreeBSD ]]
00:02:03.662 + sudo dmesg -Tw
00:02:03.662 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:03.662 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:03.662 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:03.662 + [[ -x /usr/src/fio-static/fio ]]
00:02:03.662 + export FIO_BIN=/usr/src/fio-static/fio
00:02:03.662 + FIO_BIN=/usr/src/fio-static/fio
00:02:03.662 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:03.662 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:03.662 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:03.662 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:03.662 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:03.662 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:03.662 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:03.662 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:03.662 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:03.920 04:48:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:03.920 04:48:20 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:03.920 04:48:20 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:03.920 04:48:20 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:03.920 04:48:20 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:03.920 04:48:20 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:03.920 04:48:20 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:03.920 04:48:20 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:03.920 04:48:20 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:03.920 04:48:20 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:02:03.920 04:48:20 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:03.920 04:48:20 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:03.920 04:48:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:03.920 04:48:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:03.920 04:48:20 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:03.920 04:48:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:03.920 04:48:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:03.920 04:48:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:03.920 04:48:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.921 04:48:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.921 04:48:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.921 04:48:20 -- paths/export.sh@5 -- $ export PATH
00:02:03.921 04:48:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.921 04:48:20 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:03.921 04:48:20 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:03.921 04:48:20 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732164500.XXXXXX
00:02:03.921 04:48:20 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732164500.zVY1Rc
00:02:03.921 04:48:20 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:03.921 04:48:20 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']'
00:02:03.921 04:48:20 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:03.921 04:48:20 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:03.921 04:48:20 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:03.921 04:48:20 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:03.921 04:48:20 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:03.921 04:48:20 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:03.921 04:48:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.921 04:48:20 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:02:03.921 04:48:20 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:03.921 04:48:20 -- pm/common@17 -- $ local monitor
00:02:03.921 04:48:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:03.921 04:48:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:03.921 04:48:20 -- pm/common@21 -- $ date +%s
00:02:03.921 04:48:20 -- pm/common@25 -- $ sleep 1
00:02:03.921 04:48:20 -- pm/common@21 -- $ date +%s
00:02:03.921 04:48:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732164500
00:02:03.921 04:48:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732164500
00:02:03.921 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732164500_collect-cpu-load.pm.log
00:02:03.921 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732164500_collect-vmstat.pm.log
00:02:04.855 04:48:21 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:04.855 04:48:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:04.855 04:48:21 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:04.855 04:48:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:04.855 04:48:21 -- spdk/autobuild.sh@16 -- $ date -u
00:02:04.855 Thu Nov 21 04:48:21 AM UTC 2024
00:02:04.855 04:48:21 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:04.855 v25.01-pre-219-g557f022f6
00:02:04.855 04:48:21 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:04.855 04:48:21 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:04.855 04:48:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:04.855 04:48:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:04.855 04:48:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.855 ************************************
00:02:04.855 START TEST asan
00:02:04.855 ************************************
00:02:04.855 using asan
00:02:04.855 04:48:21 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:04.855
00:02:04.855 real	0m0.001s
00:02:04.855 user	0m0.000s
00:02:04.855 sys	0m0.000s
00:02:04.855 04:48:21 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:04.855 04:48:21 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:04.855 ************************************
00:02:04.855 END TEST asan
00:02:04.855 ************************************
00:02:05.115 04:48:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:05.115 04:48:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:05.115 04:48:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:05.115 04:48:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:05.115 04:48:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:05.115 ************************************
00:02:05.115 START TEST ubsan
00:02:05.115 ************************************
00:02:05.115 using ubsan
00:02:05.115 04:48:21 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:05.115
00:02:05.115 real	0m0.000s
00:02:05.115 user	0m0.000s
00:02:05.115 sys	0m0.000s
00:02:05.115 04:48:21 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:05.115 04:48:21 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:05.115 ************************************
00:02:05.115 END TEST ubsan
00:02:05.115 ************************************
00:02:05.115 04:48:21 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:02:05.115 04:48:21 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:05.115 04:48:21 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:05.115 04:48:21 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:05.115 04:48:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:05.115 04:48:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:05.115 ************************************
00:02:05.115 START TEST build_native_dpdk
00:02:05.115 ************************************
00:02:05.115 04:48:21 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@71 --
$ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:05.115 eeb0605f11 version: 23.11.0 00:02:05.115 238778122a doc: update release notes for 23.11 00:02:05.115 46aa6b3cfc doc: fix description of RSS features 00:02:05.115 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:05.115 7e421ae345 devtools: support skipping forbid rule check 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" 
"power/kvm_vm") 00:02:05.115 04:48:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:05.116 04:48:21 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:05.116 patching file config/rte_config.h 00:02:05.116 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:05.116 patching file lib/pcapng/rte_pcapng.c 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:05.116 04:48:21 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:05.116 04:48:21 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@191 -- 
$ '[' Linux = FreeBSD ']' 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:05.116 04:48:21 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:11.695 The Meson build system 00:02:11.695 Version: 1.5.0 00:02:11.695 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:11.695 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:11.695 Build type: native build 00:02:11.695 Program cat found: YES (/usr/bin/cat) 00:02:11.695 Project name: DPDK 00:02:11.695 Project version: 23.11.0 00:02:11.695 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:11.695 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:11.695 Host machine cpu family: x86_64 00:02:11.695 Host machine cpu: x86_64 00:02:11.695 Message: ## Building in Developer Mode ## 00:02:11.695 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:11.695 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:11.695 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:11.695 Program python3 found: YES (/usr/bin/python3) 00:02:11.695 Program cat found: YES (/usr/bin/cat) 00:02:11.695 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:11.695 Compiler for C supports arguments -march=native: YES 00:02:11.695 Checking for size of "void *" : 8 00:02:11.695 Checking for size of "void *" : 8 (cached) 00:02:11.695 Library m found: YES 00:02:11.695 Library numa found: YES 00:02:11.695 Has header "numaif.h" : YES 00:02:11.695 Library fdt found: NO 00:02:11.695 Library execinfo found: NO 00:02:11.695 Has header "execinfo.h" : YES 00:02:11.695 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:11.695 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:11.695 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:11.695 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:11.695 Run-time dependency openssl found: YES 3.1.1 00:02:11.695 Run-time dependency libpcap found: YES 1.10.4 00:02:11.695 Has header "pcap.h" with dependency libpcap: YES 00:02:11.695 Compiler for C supports arguments -Wcast-qual: YES 00:02:11.695 Compiler for C supports arguments -Wdeprecated: YES 00:02:11.695 Compiler for C supports arguments -Wformat: YES 00:02:11.695 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:11.695 Compiler for C supports arguments -Wformat-security: NO 00:02:11.695 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.695 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:11.695 Compiler for C supports arguments -Wnested-externs: YES 00:02:11.695 Compiler for C supports arguments -Wold-style-definition: YES 00:02:11.695 Compiler for C supports arguments -Wpointer-arith: YES 00:02:11.695 Compiler for C supports arguments -Wsign-compare: YES 00:02:11.695 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:11.695 Compiler for C supports arguments -Wundef: YES 00:02:11.695 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.695 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:11.695 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:11.695 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:11.695 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:11.695 Program objdump found: YES (/usr/bin/objdump) 00:02:11.695 Compiler for C supports arguments -mavx512f: YES 00:02:11.695 Checking if "AVX512 checking" compiles: YES 00:02:11.695 Fetching value of define "__SSE4_2__" : 1 00:02:11.695 Fetching value of define "__AES__" : 1 00:02:11.695 Fetching value of define "__AVX__" : 1 00:02:11.695 Fetching value of define "__AVX2__" : 1 00:02:11.695 Fetching value of define "__AVX512BW__" : 1 00:02:11.695 Fetching value of define "__AVX512CD__" : 1 00:02:11.695 Fetching value of define "__AVX512DQ__" : 1 00:02:11.695 Fetching value of define "__AVX512F__" : 1 00:02:11.695 Fetching value of define "__AVX512VL__" : 1 00:02:11.695 Fetching value of define "__PCLMUL__" : 1 00:02:11.695 Fetching value of define "__RDRND__" : 1 00:02:11.695 Fetching value of define "__RDSEED__" : 1 00:02:11.695 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:11.695 Fetching value of define "__znver1__" : (undefined) 00:02:11.695 Fetching value of define "__znver2__" : (undefined) 00:02:11.695 Fetching value of define "__znver3__" : (undefined) 00:02:11.695 Fetching value of define "__znver4__" : (undefined) 00:02:11.695 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:11.695 Message: lib/log: Defining dependency "log" 00:02:11.695 Message: lib/kvargs: Defining dependency "kvargs" 00:02:11.695 Message: lib/telemetry: Defining dependency "telemetry" 00:02:11.695 Checking for function "getentropy" : NO 00:02:11.695 Message: lib/eal: Defining dependency "eal" 00:02:11.695 Message: lib/ring: Defining dependency "ring" 00:02:11.695 Message: lib/rcu: Defining dependency "rcu" 00:02:11.695 Message: lib/mempool: Defining dependency "mempool" 00:02:11.695 Message: lib/mbuf: Defining dependency "mbuf" 00:02:11.695 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:11.695 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:11.695 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.695 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.695 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:11.695 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:11.695 Compiler for C supports arguments -mpclmul: YES 00:02:11.695 Compiler for C supports arguments -maes: YES 00:02:11.695 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.695 Compiler for C supports arguments -mavx512bw: YES 00:02:11.695 Compiler for C supports arguments -mavx512dq: YES 00:02:11.695 Compiler for C supports arguments -mavx512vl: YES 00:02:11.695 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:11.695 Compiler for C supports arguments -mavx2: YES 00:02:11.695 Compiler for C supports arguments -mavx: YES 00:02:11.695 Message: lib/net: Defining dependency "net" 00:02:11.695 Message: lib/meter: Defining dependency "meter" 00:02:11.695 Message: lib/ethdev: Defining dependency "ethdev" 00:02:11.695 Message: lib/pci: Defining dependency "pci" 00:02:11.695 Message: lib/cmdline: Defining dependency "cmdline" 00:02:11.695 Message: lib/metrics: Defining dependency "metrics" 00:02:11.695 Message: lib/hash: Defining dependency "hash" 00:02:11.695 Message: lib/timer: Defining dependency "timer" 00:02:11.695 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.695 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:11.695 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:11.695 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.695 Message: lib/acl: Defining dependency "acl" 00:02:11.695 Message: lib/bbdev: Defining dependency "bbdev" 00:02:11.695 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:11.695 Run-time dependency libelf found: YES 0.191 00:02:11.695 Message: lib/bpf: Defining dependency "bpf" 00:02:11.695 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:11.695 Message: lib/compressdev: Defining dependency "compressdev" 00:02:11.695 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:11.695 Message: lib/distributor: Defining dependency "distributor" 00:02:11.695 Message: lib/dmadev: Defining dependency "dmadev" 00:02:11.695 Message: lib/efd: Defining dependency "efd" 00:02:11.695 Message: lib/eventdev: Defining dependency "eventdev" 00:02:11.695 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:11.695 Message: lib/gpudev: Defining dependency "gpudev" 00:02:11.695 Message: lib/gro: Defining dependency "gro" 00:02:11.695 Message: lib/gso: Defining dependency "gso" 00:02:11.695 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:11.695 Message: lib/jobstats: Defining dependency "jobstats" 00:02:11.695 Message: lib/latencystats: Defining dependency "latencystats" 00:02:11.695 Message: lib/lpm: Defining dependency "lpm" 00:02:11.695 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.695 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.695 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:11.695 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:11.695 Message: lib/member: Defining dependency "member" 00:02:11.695 Message: lib/pcapng: Defining dependency "pcapng" 00:02:11.695 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:11.695 Message: lib/power: Defining dependency "power" 00:02:11.695 Message: lib/rawdev: Defining dependency "rawdev" 00:02:11.695 Message: lib/regexdev: Defining dependency "regexdev" 00:02:11.695 Message: lib/mldev: Defining dependency "mldev" 00:02:11.695 Message: lib/rib: Defining dependency "rib" 00:02:11.695 Message: lib/reorder: Defining dependency "reorder" 00:02:11.695 Message: lib/sched: Defining dependency "sched" 00:02:11.695 Message: lib/security: Defining dependency "security" 00:02:11.695 Message: lib/stack: Defining dependency "stack" 00:02:11.695 Has header 
"linux/userfaultfd.h" : YES 00:02:11.695 Has header "linux/vduse.h" : YES 00:02:11.695 Message: lib/vhost: Defining dependency "vhost" 00:02:11.695 Message: lib/ipsec: Defining dependency "ipsec" 00:02:11.695 Message: lib/pdcp: Defining dependency "pdcp" 00:02:11.695 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.695 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.695 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.695 Message: lib/fib: Defining dependency "fib" 00:02:11.695 Message: lib/port: Defining dependency "port" 00:02:11.695 Message: lib/pdump: Defining dependency "pdump" 00:02:11.695 Message: lib/table: Defining dependency "table" 00:02:11.695 Message: lib/pipeline: Defining dependency "pipeline" 00:02:11.695 Message: lib/graph: Defining dependency "graph" 00:02:11.695 Message: lib/node: Defining dependency "node" 00:02:11.696 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:11.696 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:11.696 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.267 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.267 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:12.267 Compiler for C supports arguments -Wno-unused-value: YES 00:02:12.267 Compiler for C supports arguments -Wno-format: YES 00:02:12.267 Compiler for C supports arguments -Wno-format-security: YES 00:02:12.267 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:12.267 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:12.267 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:12.267 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:12.267 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.267 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:12.267 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.267 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:12.267 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:12.267 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:12.267 Has header "sys/epoll.h" : YES 00:02:12.267 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:12.267 Configuring doxy-api-html.conf using configuration 00:02:12.267 Configuring doxy-api-man.conf using configuration 00:02:12.267 Program mandb found: YES (/usr/bin/mandb) 00:02:12.267 Program sphinx-build found: NO 00:02:12.267 Configuring rte_build_config.h using configuration 00:02:12.267 Message: 00:02:12.267 ================= 00:02:12.267 Applications Enabled 00:02:12.267 ================= 00:02:12.267 00:02:12.267 apps: 00:02:12.267 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:12.267 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:12.267 test-pmd, test-regex, test-sad, test-security-perf, 00:02:12.267 00:02:12.267 Message: 00:02:12.267 ================= 00:02:12.267 Libraries Enabled 00:02:12.267 ================= 00:02:12.267 00:02:12.267 libs: 00:02:12.267 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.267 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:12.267 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:12.267 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:12.267 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:12.267 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:12.267 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:12.267 00:02:12.267 00:02:12.267 Message: 00:02:12.267 =============== 00:02:12.267 Drivers Enabled 00:02:12.267 =============== 00:02:12.267 00:02:12.267 common: 00:02:12.267 00:02:12.267 bus: 00:02:12.267 pci, vdev, 00:02:12.267 mempool: 00:02:12.267 ring, 00:02:12.267 dma: 
00:02:12.267 00:02:12.267 net: 00:02:12.267 i40e, 00:02:12.267 raw: 00:02:12.267 00:02:12.267 crypto: 00:02:12.267 00:02:12.267 compress: 00:02:12.267 00:02:12.267 regex: 00:02:12.267 00:02:12.267 ml: 00:02:12.267 00:02:12.267 vdpa: 00:02:12.267 00:02:12.267 event: 00:02:12.267 00:02:12.267 baseband: 00:02:12.267 00:02:12.267 gpu: 00:02:12.267 00:02:12.267 00:02:12.267 Message: 00:02:12.267 ================= 00:02:12.267 Content Skipped 00:02:12.267 ================= 00:02:12.267 00:02:12.267 apps: 00:02:12.267 00:02:12.267 libs: 00:02:12.267 00:02:12.267 drivers: 00:02:12.267 common/cpt: not in enabled drivers build config 00:02:12.267 common/dpaax: not in enabled drivers build config 00:02:12.267 common/iavf: not in enabled drivers build config 00:02:12.267 common/idpf: not in enabled drivers build config 00:02:12.267 common/mvep: not in enabled drivers build config 00:02:12.267 common/octeontx: not in enabled drivers build config 00:02:12.267 bus/auxiliary: not in enabled drivers build config 00:02:12.267 bus/cdx: not in enabled drivers build config 00:02:12.267 bus/dpaa: not in enabled drivers build config 00:02:12.267 bus/fslmc: not in enabled drivers build config 00:02:12.267 bus/ifpga: not in enabled drivers build config 00:02:12.267 bus/platform: not in enabled drivers build config 00:02:12.267 bus/vmbus: not in enabled drivers build config 00:02:12.267 common/cnxk: not in enabled drivers build config 00:02:12.267 common/mlx5: not in enabled drivers build config 00:02:12.267 common/nfp: not in enabled drivers build config 00:02:12.267 common/qat: not in enabled drivers build config 00:02:12.267 common/sfc_efx: not in enabled drivers build config 00:02:12.267 mempool/bucket: not in enabled drivers build config 00:02:12.267 mempool/cnxk: not in enabled drivers build config 00:02:12.267 mempool/dpaa: not in enabled drivers build config 00:02:12.267 mempool/dpaa2: not in enabled drivers build config 00:02:12.267 mempool/octeontx: not in enabled drivers build 
config 00:02:12.267 mempool/stack: not in enabled drivers build config 00:02:12.267 dma/cnxk: not in enabled drivers build config 00:02:12.267 dma/dpaa: not in enabled drivers build config 00:02:12.267 dma/dpaa2: not in enabled drivers build config 00:02:12.267 dma/hisilicon: not in enabled drivers build config 00:02:12.268 dma/idxd: not in enabled drivers build config 00:02:12.268 dma/ioat: not in enabled drivers build config 00:02:12.268 dma/skeleton: not in enabled drivers build config 00:02:12.268 net/af_packet: not in enabled drivers build config 00:02:12.268 net/af_xdp: not in enabled drivers build config 00:02:12.268 net/ark: not in enabled drivers build config 00:02:12.268 net/atlantic: not in enabled drivers build config 00:02:12.268 net/avp: not in enabled drivers build config 00:02:12.268 net/axgbe: not in enabled drivers build config 00:02:12.268 net/bnx2x: not in enabled drivers build config 00:02:12.268 net/bnxt: not in enabled drivers build config 00:02:12.268 net/bonding: not in enabled drivers build config 00:02:12.268 net/cnxk: not in enabled drivers build config 00:02:12.268 net/cpfl: not in enabled drivers build config 00:02:12.268 net/cxgbe: not in enabled drivers build config 00:02:12.268 net/dpaa: not in enabled drivers build config 00:02:12.268 net/dpaa2: not in enabled drivers build config 00:02:12.268 net/e1000: not in enabled drivers build config 00:02:12.268 net/ena: not in enabled drivers build config 00:02:12.268 net/enetc: not in enabled drivers build config 00:02:12.268 net/enetfec: not in enabled drivers build config 00:02:12.268 net/enic: not in enabled drivers build config 00:02:12.268 net/failsafe: not in enabled drivers build config 00:02:12.268 net/fm10k: not in enabled drivers build config 00:02:12.268 net/gve: not in enabled drivers build config 00:02:12.268 net/hinic: not in enabled drivers build config 00:02:12.268 net/hns3: not in enabled drivers build config 00:02:12.268 net/iavf: not in enabled drivers build config 
00:02:12.268 net/ice: not in enabled drivers build config 00:02:12.268 net/idpf: not in enabled drivers build config 00:02:12.268 net/igc: not in enabled drivers build config 00:02:12.268 net/ionic: not in enabled drivers build config 00:02:12.268 net/ipn3ke: not in enabled drivers build config 00:02:12.268 net/ixgbe: not in enabled drivers build config 00:02:12.268 net/mana: not in enabled drivers build config 00:02:12.268 net/memif: not in enabled drivers build config 00:02:12.268 net/mlx4: not in enabled drivers build config 00:02:12.268 net/mlx5: not in enabled drivers build config 00:02:12.268 net/mvneta: not in enabled drivers build config 00:02:12.268 net/mvpp2: not in enabled drivers build config 00:02:12.268 net/netvsc: not in enabled drivers build config 00:02:12.268 net/nfb: not in enabled drivers build config 00:02:12.268 net/nfp: not in enabled drivers build config 00:02:12.268 net/ngbe: not in enabled drivers build config 00:02:12.268 net/null: not in enabled drivers build config 00:02:12.268 net/octeontx: not in enabled drivers build config 00:02:12.268 net/octeon_ep: not in enabled drivers build config 00:02:12.268 net/pcap: not in enabled drivers build config 00:02:12.268 net/pfe: not in enabled drivers build config 00:02:12.268 net/qede: not in enabled drivers build config 00:02:12.268 net/ring: not in enabled drivers build config 00:02:12.268 net/sfc: not in enabled drivers build config 00:02:12.268 net/softnic: not in enabled drivers build config 00:02:12.268 net/tap: not in enabled drivers build config 00:02:12.268 net/thunderx: not in enabled drivers build config 00:02:12.268 net/txgbe: not in enabled drivers build config 00:02:12.268 net/vdev_netvsc: not in enabled drivers build config 00:02:12.268 net/vhost: not in enabled drivers build config 00:02:12.268 net/virtio: not in enabled drivers build config 00:02:12.268 net/vmxnet3: not in enabled drivers build config 00:02:12.268 raw/cnxk_bphy: not in enabled drivers build config 00:02:12.268 
raw/cnxk_gpio: not in enabled drivers build config 00:02:12.268 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:12.268 raw/ifpga: not in enabled drivers build config 00:02:12.268 raw/ntb: not in enabled drivers build config 00:02:12.268 raw/skeleton: not in enabled drivers build config 00:02:12.268 crypto/armv8: not in enabled drivers build config 00:02:12.268 crypto/bcmfs: not in enabled drivers build config 00:02:12.268 crypto/caam_jr: not in enabled drivers build config 00:02:12.268 crypto/ccp: not in enabled drivers build config 00:02:12.268 crypto/cnxk: not in enabled drivers build config 00:02:12.268 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.268 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.268 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.268 crypto/mlx5: not in enabled drivers build config 00:02:12.268 crypto/mvsam: not in enabled drivers build config 00:02:12.268 crypto/nitrox: not in enabled drivers build config 00:02:12.268 crypto/null: not in enabled drivers build config 00:02:12.268 crypto/octeontx: not in enabled drivers build config 00:02:12.268 crypto/openssl: not in enabled drivers build config 00:02:12.268 crypto/scheduler: not in enabled drivers build config 00:02:12.268 crypto/uadk: not in enabled drivers build config 00:02:12.268 crypto/virtio: not in enabled drivers build config 00:02:12.268 compress/isal: not in enabled drivers build config 00:02:12.268 compress/mlx5: not in enabled drivers build config 00:02:12.268 compress/octeontx: not in enabled drivers build config 00:02:12.268 compress/zlib: not in enabled drivers build config 00:02:12.268 regex/mlx5: not in enabled drivers build config 00:02:12.268 regex/cn9k: not in enabled drivers build config 00:02:12.268 ml/cnxk: not in enabled drivers build config 00:02:12.268 vdpa/ifc: not in enabled drivers build config 00:02:12.268 vdpa/mlx5: not in enabled drivers build config 00:02:12.268 vdpa/nfp: not in enabled drivers build 
config 00:02:12.268 vdpa/sfc: not in enabled drivers build config 00:02:12.268 event/cnxk: not in enabled drivers build config 00:02:12.268 event/dlb2: not in enabled drivers build config 00:02:12.268 event/dpaa: not in enabled drivers build config 00:02:12.268 event/dpaa2: not in enabled drivers build config 00:02:12.268 event/dsw: not in enabled drivers build config 00:02:12.268 event/opdl: not in enabled drivers build config 00:02:12.268 event/skeleton: not in enabled drivers build config 00:02:12.268 event/sw: not in enabled drivers build config 00:02:12.268 event/octeontx: not in enabled drivers build config 00:02:12.268 baseband/acc: not in enabled drivers build config 00:02:12.268 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:12.268 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:12.268 baseband/la12xx: not in enabled drivers build config 00:02:12.268 baseband/null: not in enabled drivers build config 00:02:12.268 baseband/turbo_sw: not in enabled drivers build config 00:02:12.268 gpu/cuda: not in enabled drivers build config 00:02:12.268 00:02:12.268 00:02:12.268 Build targets in project: 217 00:02:12.268 00:02:12.268 DPDK 23.11.0 00:02:12.268 00:02:12.268 User defined options 00:02:12.268 libdir : lib 00:02:12.268 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:12.268 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:12.268 c_link_args : 00:02:12.268 enable_docs : false 00:02:12.268 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:12.268 enable_kmods : false 00:02:12.268 machine : native 00:02:12.268 tests : false 00:02:12.268 00:02:12.268 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.268 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:12.528 04:48:29 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:12.528 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:12.528 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:12.528 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.528 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.528 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.528 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:12.528 [6/707] Linking static target lib/librte_kvargs.a 00:02:12.528 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:12.787 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:12.787 [9/707] Linking static target lib/librte_log.a 00:02:12.787 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.787 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.787 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:12.787 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:13.047 [14/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:13.047 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:13.047 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:13.047 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.047 [18/707] Linking target lib/librte_log.so.24.0 00:02:13.047 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.047 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:13.318 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:13.318 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.318 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:13.318 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:13.318 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.318 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:13.591 [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:13.591 [28/707] Linking target lib/librte_kvargs.so.24.0 00:02:13.591 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:13.591 [30/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:13.591 [31/707] Linking static target lib/librte_telemetry.a 00:02:13.591 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:13.591 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:13.591 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:13.591 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:13.591 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:13.591 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:13.851 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:13.851 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:13.851 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:13.851 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:13.851 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:13.851 [43/707] Linking target lib/librte_telemetry.so.24.0 00:02:13.851 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:13.851 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:13.851 [46/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:14.110 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.110 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.110 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:14.110 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.110 [51/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:14.110 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:14.370 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:14.370 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:14.370 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:14.370 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:14.370 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:14.370 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.370 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:14.370 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:14.370 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:14.370 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:14.370 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:14.630 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:14.630 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:14.630 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:14.630 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:14.630 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.890 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:14.890 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:14.890 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.890 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.890 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:14.890 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:14.890 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:14.890 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:14.890 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.890 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:15.150 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:15.150 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:15.150 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.150 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.150 [83/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.150 [84/707] Linking static target lib/librte_ring.a 00:02:15.150 [85/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:15.410 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:15.410 [87/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.410 [88/707] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.410 [89/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:15.410 [90/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:15.410 [91/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:15.410 [92/707] Linking static target lib/librte_eal.a 00:02:15.669 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.669 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.669 [95/707] Linking static target lib/librte_mempool.a 00:02:15.669 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.669 [97/707] Linking static target lib/librte_rcu.a 00:02:15.929 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.929 [99/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.929 [100/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.929 [101/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.929 [102/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.929 [103/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.929 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.929 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.189 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.189 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:16.189 [108/707] Linking static target lib/librte_net.a 00:02:16.189 [109/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:16.189 [110/707] Linking static target lib/librte_mbuf.a 00:02:16.189 [111/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.189 [112/707] Linking static target lib/librte_meter.a 
00:02:16.189 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.449 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.449 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.449 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.449 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.449 [118/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.708 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.708 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.968 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:17.227 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.227 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:17.227 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.227 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:17.227 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.227 [127/707] Linking static target lib/librte_pci.a 00:02:17.227 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.227 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.486 [130/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.486 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:17.486 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.486 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.486 [134/707] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.486 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.486 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.486 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.487 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.487 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.487 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:17.746 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:17.746 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:17.746 [143/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.746 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:17.746 [145/707] Linking static target lib/librte_cmdline.a 00:02:18.006 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:18.006 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:18.006 [148/707] Linking static target lib/librte_metrics.a 00:02:18.006 [149/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.006 [150/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.266 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.266 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:18.266 [153/707] Linking static target lib/librte_timer.a 00:02:18.266 [154/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.525 [155/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:18.525 [156/707] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:18.525 [157/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.783 [158/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:18.783 [159/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:18.783 [160/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:19.043 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:19.043 [162/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:19.043 [163/707] Linking static target lib/librte_bitratestats.a 00:02:19.303 [164/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.303 [165/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:19.303 [166/707] Linking static target lib/librte_bbdev.a 00:02:19.303 [167/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:19.562 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:19.822 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:19.822 [170/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:19.822 [171/707] Linking static target lib/librte_hash.a 00:02:19.822 [172/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:19.822 [173/707] Linking static target lib/librte_ethdev.a 00:02:19.822 [174/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.822 [175/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:19.822 [176/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:19.822 [177/707] Linking static target lib/acl/libavx2_tmp.a 00:02:20.082 [178/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:20.082 [179/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:20.082 [180/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 
00:02:20.341 [181/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.341 [182/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:20.341 [183/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.341 [184/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:20.341 [185/707] Linking static target lib/librte_cfgfile.a 00:02:20.341 [186/707] Linking target lib/librte_eal.so.24.0 00:02:20.599 [187/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:20.599 [188/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:20.599 [189/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.599 [190/707] Linking target lib/librte_meter.so.24.0 00:02:20.599 [191/707] Linking target lib/librte_ring.so.24.0 00:02:20.599 [192/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:20.599 [193/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.599 [194/707] Linking target lib/librte_pci.so.24.0 00:02:20.599 [195/707] Linking target lib/librte_timer.so.24.0 00:02:20.599 [196/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.599 [197/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:20.599 [198/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:20.599 [199/707] Linking target lib/librte_cfgfile.so.24.0 00:02:20.858 [200/707] Linking target lib/librte_rcu.so.24.0 00:02:20.858 [201/707] Linking target lib/librte_mempool.so.24.0 00:02:20.858 [202/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:20.858 [203/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:20.858 [204/707] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:20.858 [205/707] Linking static target lib/librte_bpf.a 00:02:20.858 [206/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:20.858 [207/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.858 [208/707] Linking static target lib/librte_compressdev.a 00:02:20.858 [209/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:20.858 [210/707] Linking target lib/librte_mbuf.so.24.0 00:02:20.858 [211/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:21.117 [212/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:21.117 [213/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:21.117 [214/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.117 [215/707] Linking target lib/librte_net.so.24.0 00:02:21.117 [216/707] Linking static target lib/librte_acl.a 00:02:21.117 [217/707] Linking target lib/librte_bbdev.so.24.0 00:02:21.117 [218/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:21.117 [219/707] Linking target lib/librte_cmdline.so.24.0 00:02:21.117 [220/707] Linking target lib/librte_hash.so.24.0 00:02:21.118 [221/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.376 [222/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:21.376 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:21.376 [224/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.376 [225/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.376 [226/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 
00:02:21.376 [227/707] Linking target lib/librte_compressdev.so.24.0 00:02:21.376 [228/707] Linking target lib/librte_acl.so.24.0 00:02:21.376 [229/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:21.376 [230/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:21.376 [231/707] Linking static target lib/librte_distributor.a 00:02:21.636 [232/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.636 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:21.636 [234/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.636 [235/707] Linking target lib/librte_distributor.so.24.0 00:02:21.896 [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.896 [237/707] Linking static target lib/librte_dmadev.a 00:02:21.896 [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:22.155 [239/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.155 [240/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:22.155 [241/707] Linking target lib/librte_dmadev.so.24.0 00:02:22.155 [242/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:22.155 [243/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:22.414 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:22.414 [245/707] Linking static target lib/librte_efd.a 00:02:22.414 [246/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:22.683 [247/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.683 [248/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.683 [249/707] Linking static target 
lib/librte_cryptodev.a 00:02:22.683 [250/707] Linking target lib/librte_efd.so.24.0 00:02:22.683 [251/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:22.684 [252/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:22.684 [253/707] Linking static target lib/librte_dispatcher.a 00:02:22.952 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:22.952 [255/707] Linking static target lib/librte_gpudev.a 00:02:22.952 [256/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:22.952 [257/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:22.952 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:23.211 [259/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.211 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:23.471 [261/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:23.471 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:23.471 [263/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:23.471 [264/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.471 [265/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:23.732 [266/707] Linking static target lib/librte_gro.a 00:02:23.733 [267/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:23.733 [268/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.733 [269/707] Linking target lib/librte_gpudev.so.24.0 00:02:23.733 [270/707] Linking target lib/librte_cryptodev.so.24.0 00:02:23.733 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:23.733 [272/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:23.733 
[273/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:23.733 [274/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:23.733 [275/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.733 [276/707] Linking static target lib/librte_eventdev.a 00:02:23.993 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:23.993 [278/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:23.993 [279/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.993 [280/707] Linking target lib/librte_ethdev.so.24.0 00:02:23.993 [281/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:23.993 [282/707] Linking static target lib/librte_gso.a 00:02:23.993 [283/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:23.993 [284/707] Linking target lib/librte_metrics.so.24.0 00:02:24.253 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:24.253 [286/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.253 [287/707] Linking target lib/librte_gro.so.24.0 00:02:24.253 [288/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:24.253 [289/707] Linking target lib/librte_bpf.so.24.0 00:02:24.253 [290/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:24.253 [291/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:24.253 [292/707] Linking static target lib/librte_jobstats.a 00:02:24.253 [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:24.253 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:24.253 [295/707] Linking target lib/librte_gso.so.24.0 00:02:24.253 [296/707] Linking target 
lib/librte_bitratestats.so.24.0 00:02:24.253 [297/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:24.253 [298/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:24.512 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:24.512 [300/707] Linking static target lib/librte_ip_frag.a 00:02:24.512 [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.512 [302/707] Linking target lib/librte_jobstats.so.24.0 00:02:24.512 [303/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.771 [304/707] Linking target lib/librte_ip_frag.so.24.0 00:02:24.771 [305/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:24.771 [306/707] Linking static target lib/librte_latencystats.a 00:02:24.771 [307/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:24.771 [308/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:24.771 [309/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:24.771 [310/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:24.771 [311/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:24.771 [312/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.771 [313/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.771 [314/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:25.029 [315/707] Linking target lib/librte_latencystats.so.24.0 00:02:25.029 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:25.029 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:25.029 [318/707] Linking static target lib/librte_lpm.a 00:02:25.288 [319/707] Compiling C 
object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:25.288 [320/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.288 [321/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.288 [322/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:25.288 [323/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:25.288 [324/707] Linking target lib/librte_lpm.so.24.0 00:02:25.288 [325/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:25.288 [326/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.288 [327/707] Linking static target lib/librte_pcapng.a 00:02:25.546 [328/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:25.546 [329/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.546 [330/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:25.546 [331/707] Linking target lib/librte_eventdev.so.24.0 00:02:25.546 [332/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.546 [333/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.546 [334/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:25.546 [335/707] Linking target lib/librte_pcapng.so.24.0 00:02:25.546 [336/707] Linking target lib/librte_dispatcher.so.24.0 00:02:25.806 [337/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:25.806 [338/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.806 [339/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:25.806 [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:25.806 [341/707] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.806 [342/707] Linking static target lib/librte_power.a 00:02:25.806 [343/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:25.806 [344/707] Linking static target lib/librte_regexdev.a 00:02:26.065 [345/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:26.065 [346/707] Linking static target lib/librte_rawdev.a 00:02:26.065 [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:26.065 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:26.065 [349/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:26.065 [350/707] Linking static target lib/librte_member.a 00:02:26.065 [351/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:26.325 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:26.325 [353/707] Linking static target lib/librte_mldev.a 00:02:26.325 [354/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:26.325 [355/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.325 [356/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:26.325 [357/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.325 [358/707] Linking target lib/librte_rawdev.so.24.0 00:02:26.584 [359/707] Linking target lib/librte_member.so.24.0 00:02:26.584 [360/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.584 [361/707] Linking target lib/librte_power.so.24.0 00:02:26.584 [362/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:26.584 [363/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:26.584 [364/707] Linking static target lib/librte_rib.a 00:02:26.584 [365/707] Generating lib/regexdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:26.584 [366/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:26.584 [367/707] Linking static target lib/librte_reorder.a 00:02:26.584 [368/707] Linking target lib/librte_regexdev.so.24.0 00:02:26.584 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:26.844 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:26.844 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:26.844 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:26.844 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:26.844 [374/707] Linking static target lib/librte_stack.a 00:02:26.844 [375/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.844 [376/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.844 [377/707] Linking target lib/librte_reorder.so.24.0 00:02:26.844 [378/707] Linking target lib/librte_rib.so.24.0 00:02:26.844 [379/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:26.844 [380/707] Linking static target lib/librte_security.a 00:02:27.104 [381/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.104 [382/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:27.104 [383/707] Linking target lib/librte_stack.so.24.0 00:02:27.104 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:27.104 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:27.365 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:27.365 [387/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.365 [388/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:27.365 [389/707] Linking target lib/librte_security.so.24.0 00:02:27.365 [390/707] Linking target lib/librte_mldev.so.24.0 00:02:27.365 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:27.365 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:27.624 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:27.625 [394/707] Linking static target lib/librte_sched.a 00:02:27.625 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:27.885 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:27.885 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.885 [398/707] Linking target lib/librte_sched.so.24.0 00:02:27.885 [399/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:27.885 [400/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:27.885 [401/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:28.145 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:28.145 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:28.405 [404/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:28.405 [405/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:28.405 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:28.405 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:28.665 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:28.665 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:28.665 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:28.666 [411/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:28.666 [412/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:28.666 [413/707] 
Linking static target lib/librte_ipsec.a 00:02:28.926 [414/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:28.926 [415/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:28.926 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.926 [417/707] Linking target lib/librte_ipsec.so.24.0 00:02:29.186 [418/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:29.186 [419/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:29.186 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:29.446 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:29.446 [422/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:29.446 [423/707] Linking static target lib/librte_fib.a 00:02:29.446 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:29.707 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:29.707 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:29.707 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:29.707 [428/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:29.707 [429/707] Linking static target lib/librte_pdcp.a 00:02:29.707 [430/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:29.707 [431/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.967 [432/707] Linking target lib/librte_fib.so.24.0 00:02:29.967 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.967 [434/707] Linking target lib/librte_pdcp.so.24.0 00:02:30.227 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:30.227 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:30.227 [437/707] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:30.487 [438/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:30.487 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:30.487 [440/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:30.487 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:30.748 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:30.748 [443/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:30.748 [444/707] Linking static target lib/librte_port.a 00:02:30.748 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:31.008 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:31.008 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:31.008 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:31.008 [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:31.008 [450/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:31.267 [451/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:31.267 [452/707] Linking static target lib/librte_pdump.a 00:02:31.267 [453/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.267 [454/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:31.267 [455/707] Linking target lib/librte_port.so.24.0 00:02:31.267 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.527 [457/707] Linking target lib/librte_pdump.so.24.0 00:02:31.527 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:31.527 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:31.787 [460/707] 
Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:31.787 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:31.787 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:31.787 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:31.787 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:32.047 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:32.047 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:32.047 [467/707] Linking static target lib/librte_table.a 00:02:32.047 [468/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:32.047 [469/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:32.307 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:32.567 [471/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.567 [472/707] Linking target lib/librte_table.so.24.0 00:02:32.567 [473/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:32.567 [474/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:32.827 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:32.827 [476/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:32.827 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:32.827 [478/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:33.086 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:33.086 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:33.086 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:33.347 [482/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 
00:02:33.347 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:33.607 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:33.607 [485/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:33.607 [486/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:33.607 [487/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:33.607 [488/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:33.607 [489/707] Linking static target lib/librte_graph.a 00:02:33.867 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:34.127 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:34.127 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.127 [493/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:34.127 [494/707] Linking target lib/librte_graph.so.24.0 00:02:34.388 [495/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:34.388 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:34.388 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:34.388 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:34.388 [499/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:34.388 [500/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:34.648 [501/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:34.648 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:34.648 [503/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:34.648 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:34.908 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:34.908 [506/707] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:34.908 [507/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:34.908 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.169 [509/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:35.169 [510/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:35.169 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.169 [512/707] Linking static target lib/librte_node.a 00:02:35.169 [513/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.169 [514/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:35.429 [515/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.429 [516/707] Linking target lib/librte_node.so.24.0 00:02:35.429 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:35.429 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:35.429 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:35.429 [520/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.429 [521/707] Linking static target drivers/librte_bus_pci.a 00:02:35.723 [522/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:35.723 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.723 [524/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:35.723 [525/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.723 [526/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.723 [527/707] Linking static target drivers/librte_bus_vdev.a 00:02:35.723 [528/707] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:35.723 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:35.995 [530/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.995 [531/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.995 [532/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:35.995 [533/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:35.995 [534/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:35.995 [535/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:35.995 [536/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:35.995 [537/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:35.995 [538/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:35.995 [539/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.995 [540/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:35.995 [541/707] Linking static target drivers/librte_mempool_ring.a 00:02:35.995 [542/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.252 [543/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:36.252 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:36.511 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:36.770 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:36.770 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:37.336 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:37.336 
[549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:37.592 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:37.592 [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:37.592 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:37.592 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:37.592 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:37.850 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:37.850 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:38.109 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:38.109 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:38.109 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:38.368 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:38.368 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:38.627 [562/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:38.627 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:38.887 [564/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:38.887 [565/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:38.887 [566/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:38.887 [567/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:39.145 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:39.145 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:39.145 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:39.145 
[571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:39.404 [572/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:39.404 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:39.404 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:39.404 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:39.663 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:39.664 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:39.664 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:39.922 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:39.922 [580/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:39.922 [581/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:40.181 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:40.181 [583/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:40.181 [584/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:40.181 [585/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:40.440 [586/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:40.440 [587/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:40.440 [588/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:40.440 [589/707] Linking static target drivers/librte_net_i40e.a 00:02:40.698 [590/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:40.698 [591/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:40.698 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:40.698 [593/707] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:40.956 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:40.956 [595/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:40.957 [596/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:40.957 [597/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.957 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:41.216 [599/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:41.216 [600/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.216 [601/707] Linking static target lib/librte_vhost.a 00:02:41.216 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:41.480 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:41.480 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:41.480 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:41.480 [606/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:41.480 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:41.480 [608/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:41.747 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:41.747 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:41.747 [611/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:42.007 [612/707] Compiling C object 
app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:42.007 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:42.007 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:42.007 [615/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.007 [616/707] Linking target lib/librte_vhost.so.24.0 00:02:42.007 [617/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:42.266 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:42.526 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:42.526 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:43.095 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:43.095 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:43.095 [623/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:43.355 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:43.355 [625/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:43.355 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:43.355 [627/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:43.355 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:43.355 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:43.615 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:43.615 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:43.615 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:43.615 [633/707] Compiling C 
object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:43.615 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:43.875 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:43.875 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:43.875 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:44.135 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:44.135 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:44.135 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:44.135 [641/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:44.135 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:44.396 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:44.396 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:44.396 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:44.655 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:44.655 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:44.655 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:44.655 [649/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:44.915 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:44.915 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:45.175 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:45.175 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:45.175 [654/707] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:45.175 [655/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:45.435 [656/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:45.435 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:45.435 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:45.435 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:45.695 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:45.955 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:45.955 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:45.955 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:45.955 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:46.215 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:46.475 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:46.475 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:46.475 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:46.475 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:46.735 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:46.735 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:46.995 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:46.995 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:47.255 [674/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:47.255 [675/707] Linking static target lib/librte_pipeline.a 00:02:47.255 [676/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:47.514 [677/707] Compiling C object 
app/dpdk-test-sad.p/test-sad_main.c.o 00:02:47.514 [678/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:47.514 [679/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:47.514 [680/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:47.514 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:47.774 [682/707] Linking target app/dpdk-dumpcap 00:02:47.774 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:47.774 [684/707] Linking target app/dpdk-graph 00:02:47.774 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:48.034 [686/707] Linking target app/dpdk-pdump 00:02:48.034 [687/707] Linking target app/dpdk-proc-info 00:02:48.034 [688/707] Linking target app/dpdk-test-acl 00:02:48.034 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:48.034 [690/707] Linking target app/dpdk-test-bbdev 00:02:48.034 [691/707] Linking target app/dpdk-test-cmdline 00:02:48.294 [692/707] Linking target app/dpdk-test-crypto-perf 00:02:48.294 [693/707] Linking target app/dpdk-test-compress-perf 00:02:48.294 [694/707] Linking target app/dpdk-test-dma-perf 00:02:48.294 [695/707] Linking target app/dpdk-test-fib 00:02:48.294 [696/707] Linking target app/dpdk-test-eventdev 00:02:48.294 [697/707] Linking target app/dpdk-test-flow-perf 00:02:48.294 [698/707] Linking target app/dpdk-test-gpudev 00:02:48.294 [699/707] Linking target app/dpdk-test-mldev 00:02:48.553 [700/707] Linking target app/dpdk-test-pipeline 00:02:48.553 [701/707] Linking target app/dpdk-test-sad 00:02:48.553 [702/707] Linking target app/dpdk-test-regex 00:02:48.812 [703/707] Linking target app/dpdk-testpmd 00:02:49.380 [704/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:49.640 [705/707] Linking target app/dpdk-test-security-perf 00:02:52.943 [706/707] Generating lib/pipeline.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:52.943 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:52.943 04:49:09 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:52.943 04:49:09 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:52.943 04:49:09 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:52.943 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:52.943 [0/1] Installing files. 00:02:52.943 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:52.943 Installing 
/home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.943 
Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.944 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.944 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:52.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:52.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:52.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.946 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.946 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:52.947 Installing 
/home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.947 
Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:52.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:52.947 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.947 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.208 
Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.208 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.209 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.469 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.469 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.469 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.469 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:53.469 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.469 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:53.469 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.469 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:53.469 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:53.469 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:53.469 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.469 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.470 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.470 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.470 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.470 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.470 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.470 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.470 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.470 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.471 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:53.472 Installing
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.472 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing 
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 
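The lines above record DPDK's meson install stage copying each public `rte_*.h` header flat into `build/include`. As a minimal sketch (with made-up temp paths, not the real repo layout), the same per-file pattern can be reproduced with the standard `install` utility:

```shell
# Mimic the "Installing <src>.h to <dst>" pattern from the log, using demo paths.
src=$(mktemp -d)/lib/lpm          # stand-in for dpdk/lib/<component>
dst=$(mktemp -d)/build/include    # stand-in for the flat include dir
mkdir -p "$src" "$dst"
printf '/* stand-in for a real DPDK header */\n' > "$src/rte_lpm.h"
install -m 644 "$src/rte_lpm.h" "$dst/"   # copy one header, explicit mode
ls "$dst"
```

Note that the destination is flat: headers from many `lib/<component>` directories all land directly in `build/include`, which is why the log shows the same target path on every line.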
00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 
Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:53.473 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:53.473 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:53.473 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:53.473 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:53.473 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:53.473 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:53.473 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:53.473 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:53.473 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:53.473 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:53.473 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:53.473 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:53.473 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:53.473 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:53.473 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:53.473 Installing symlink pointing to librte_mbuf.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:53.473 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:53.473 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:53.473 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:53.473 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:53.473 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:53.473 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:53.473 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:53.474 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:53.474 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:53.474 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:53.474 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:53.474 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:53.474 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:53.474 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:53.474 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:53.474 Installing symlink pointing to librte_timer.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:53.474 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:53.474 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:53.474 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:53.474 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:53.474 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:53.474 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:53.474 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:53.474 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:53.474 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:53.474 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:53.474 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:53.474 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:53.474 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:53.474 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:53.474 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:53.474 Installing symlink 
pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:53.474 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:53.474 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:53.474 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:53.474 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:53.474 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:53.474 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:53.474 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:53.474 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:53.474 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:53.474 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:53.474 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:53.474 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:53.474 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:53.474 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:53.474 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:53.474 
Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:53.474 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:53.474 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:53.474 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:53.474 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:53.474 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:53.474 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:53.474 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:53.474 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:53.474 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:53.474 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:53.474 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:53.474 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:53.474 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:53.474 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:53.474 Installing symlink pointing to librte_rawdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:53.474 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:53.474 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:53.474 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:53.474 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:53.474 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:53.474 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:53.474 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:53.474 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:53.474 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:53.474 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:53.474 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:53.474 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:53.474 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:53.474 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:53.474 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:53.474 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:53.474 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:53.474 './librte_bus_vdev.so.24.0' -> 
'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:53.474 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:53.474 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:53.474 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:53.474 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:53.474 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:53.474 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:53.474 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:53.474 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:53.474 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:53.474 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:53.474 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:53.474 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:53.474 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:53.474 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:53.474 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:53.474 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:53.474 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:53.474 Installing symlink pointing to librte_port.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:53.474 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:53.474 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:53.474 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:53.475 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:53.475 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:53.475 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:53.475 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:53.475 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:53.475 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:53.475 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:53.475 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:53.475 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:53.475 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:53.475 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:53.475 Installing symlink pointing to librte_mempool_ring.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:53.475 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:53.475 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:53.475 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:53.475 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:53.475 04:49:10 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:53.735 04:49:10 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:53.735 00:02:53.735 real 0m48.498s 00:02:53.735 user 5m5.383s 00:02:53.735 sys 0m58.257s 00:02:53.735 04:49:10 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:53.735 04:49:10 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:53.735 ************************************ 00:02:53.735 END TEST build_native_dpdk 00:02:53.735 ************************************ 00:02:53.735 04:49:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:53.735 04:49:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:53.735 04:49:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:53.735 04:49:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:53.735 04:49:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:53.735 04:49:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:53.735 04:49:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:53.735 04:49:10 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan 
--enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:53.735 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:53.995 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.995 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:53.995 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:54.563 Using 'verbs' RDMA provider 00:03:10.491 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:28.587 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:28.587 Creating mk/config.mk...done. 00:03:28.587 Creating mk/cc.flags.mk...done. 00:03:28.587 Type 'make' to build. 00:03:28.587 04:49:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:28.587 04:49:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:28.587 04:49:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:28.587 04:49:43 -- common/autotest_common.sh@10 -- $ set +x 00:03:28.587 ************************************ 00:03:28.587 START TEST make 00:03:28.587 ************************************ 00:03:28.587 04:49:43 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:28.587 make[1]: Nothing to be done for 'all'. 
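The "Installing symlink pointing to …" entries earlier in the log build the conventional Linux shared-library name chain for each DPDK library: the linker name `librte_X.so` points at the ABI-versioned `librte_X.so.24`, which in turn points at the full version `librte_X.so.24.0`. A minimal sketch of that chain in a temp directory, with `librte_example` as a made-up library name:

```shell
# Recreate the versioned-symlink chain the DPDK install step logs above.
# "librte_example" is hypothetical; real names are e.g. librte_eal, librte_mbuf.
demo=$(mktemp -d)
touch "$demo/librte_example.so.24.0"                        # the actual shared object
ln -s librte_example.so.24.0 "$demo/librte_example.so.24"   # ABI-versioned name
ln -s librte_example.so.24   "$demo/librte_example.so"      # linker name used at build time
readlink "$demo/librte_example.so"      # librte_example.so.24
readlink "$demo/librte_example.so.24"   # librte_example.so.24.0
```

The runtime loader resolves `librte_example.so.24` (the SONAME), while `-lrte_example` at link time resolves the bare `.so`; keeping both as symlinks to one file lets minor versions be swapped without relinking.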
00:04:15.318 CC lib/log/log.o 00:04:15.318 CC lib/log/log_flags.o 00:04:15.318 CC lib/log/log_deprecated.o 00:04:15.318 CC lib/ut_mock/mock.o 00:04:15.318 CC lib/ut/ut.o 00:04:15.318 LIB libspdk_log.a 00:04:15.318 LIB libspdk_ut_mock.a 00:04:15.318 LIB libspdk_ut.a 00:04:15.318 SO libspdk_log.so.7.1 00:04:15.318 SO libspdk_ut_mock.so.6.0 00:04:15.318 SO libspdk_ut.so.2.0 00:04:15.318 SYMLINK libspdk_log.so 00:04:15.318 SYMLINK libspdk_ut_mock.so 00:04:15.318 SYMLINK libspdk_ut.so 00:04:15.318 CC lib/dma/dma.o 00:04:15.318 CC lib/ioat/ioat.o 00:04:15.318 CXX lib/trace_parser/trace.o 00:04:15.318 CC lib/util/bit_array.o 00:04:15.318 CC lib/util/cpuset.o 00:04:15.318 CC lib/util/crc16.o 00:04:15.318 CC lib/util/crc32.o 00:04:15.318 CC lib/util/crc32c.o 00:04:15.318 CC lib/util/base64.o 00:04:15.318 CC lib/vfio_user/host/vfio_user_pci.o 00:04:15.318 CC lib/util/crc32_ieee.o 00:04:15.318 CC lib/util/crc64.o 00:04:15.318 LIB libspdk_dma.a 00:04:15.318 CC lib/util/dif.o 00:04:15.318 CC lib/vfio_user/host/vfio_user.o 00:04:15.318 SO libspdk_dma.so.5.0 00:04:15.318 CC lib/util/fd.o 00:04:15.318 CC lib/util/fd_group.o 00:04:15.318 CC lib/util/file.o 00:04:15.318 SYMLINK libspdk_dma.so 00:04:15.318 CC lib/util/hexlify.o 00:04:15.318 LIB libspdk_ioat.a 00:04:15.318 SO libspdk_ioat.so.7.0 00:04:15.318 CC lib/util/iov.o 00:04:15.318 SYMLINK libspdk_ioat.so 00:04:15.318 CC lib/util/math.o 00:04:15.318 CC lib/util/net.o 00:04:15.318 CC lib/util/pipe.o 00:04:15.318 LIB libspdk_vfio_user.a 00:04:15.318 CC lib/util/strerror_tls.o 00:04:15.318 CC lib/util/string.o 00:04:15.318 SO libspdk_vfio_user.so.5.0 00:04:15.318 SYMLINK libspdk_vfio_user.so 00:04:15.318 CC lib/util/uuid.o 00:04:15.318 CC lib/util/xor.o 00:04:15.318 CC lib/util/zipf.o 00:04:15.318 CC lib/util/md5.o 00:04:15.318 LIB libspdk_util.a 00:04:15.318 SO libspdk_util.so.10.1 00:04:15.318 LIB libspdk_trace_parser.a 00:04:15.318 SO libspdk_trace_parser.so.6.0 00:04:15.318 SYMLINK libspdk_util.so 00:04:15.318 SYMLINK 
libspdk_trace_parser.so 00:04:15.318 CC lib/vmd/vmd.o 00:04:15.318 CC lib/env_dpdk/env.o 00:04:15.318 CC lib/rdma_utils/rdma_utils.o 00:04:15.318 CC lib/vmd/led.o 00:04:15.318 CC lib/env_dpdk/memory.o 00:04:15.318 CC lib/env_dpdk/init.o 00:04:15.318 CC lib/env_dpdk/pci.o 00:04:15.318 CC lib/idxd/idxd.o 00:04:15.318 CC lib/json/json_parse.o 00:04:15.318 CC lib/conf/conf.o 00:04:15.318 CC lib/idxd/idxd_user.o 00:04:15.318 LIB libspdk_conf.a 00:04:15.318 CC lib/json/json_util.o 00:04:15.318 SO libspdk_conf.so.6.0 00:04:15.318 LIB libspdk_rdma_utils.a 00:04:15.318 SO libspdk_rdma_utils.so.1.0 00:04:15.318 SYMLINK libspdk_conf.so 00:04:15.318 CC lib/json/json_write.o 00:04:15.318 SYMLINK libspdk_rdma_utils.so 00:04:15.318 CC lib/idxd/idxd_kernel.o 00:04:15.318 CC lib/env_dpdk/threads.o 00:04:15.318 CC lib/env_dpdk/pci_ioat.o 00:04:15.318 CC lib/env_dpdk/pci_virtio.o 00:04:15.318 CC lib/env_dpdk/pci_vmd.o 00:04:15.318 CC lib/env_dpdk/pci_idxd.o 00:04:15.318 CC lib/env_dpdk/pci_event.o 00:04:15.318 CC lib/env_dpdk/sigbus_handler.o 00:04:15.318 LIB libspdk_json.a 00:04:15.318 CC lib/env_dpdk/pci_dpdk.o 00:04:15.318 SO libspdk_json.so.6.0 00:04:15.318 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:15.318 CC lib/rdma_provider/common.o 00:04:15.318 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:15.318 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:15.318 SYMLINK libspdk_json.so 00:04:15.318 LIB libspdk_idxd.a 00:04:15.318 SO libspdk_idxd.so.12.1 00:04:15.318 LIB libspdk_vmd.a 00:04:15.318 SO libspdk_vmd.so.6.0 00:04:15.318 SYMLINK libspdk_idxd.so 00:04:15.318 SYMLINK libspdk_vmd.so 00:04:15.318 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:15.318 CC lib/jsonrpc/jsonrpc_client.o 00:04:15.318 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:15.318 CC lib/jsonrpc/jsonrpc_server.o 00:04:15.318 LIB libspdk_rdma_provider.a 00:04:15.318 SO libspdk_rdma_provider.so.7.0 00:04:15.318 SYMLINK libspdk_rdma_provider.so 00:04:15.318 LIB libspdk_jsonrpc.a 00:04:15.318 SO libspdk_jsonrpc.so.6.0 00:04:15.318 SYMLINK 
libspdk_jsonrpc.so 00:04:15.318 LIB libspdk_env_dpdk.a 00:04:15.318 CC lib/rpc/rpc.o 00:04:15.318 SO libspdk_env_dpdk.so.15.1 00:04:15.318 SYMLINK libspdk_env_dpdk.so 00:04:15.318 LIB libspdk_rpc.a 00:04:15.318 SO libspdk_rpc.so.6.0 00:04:15.318 SYMLINK libspdk_rpc.so 00:04:15.318 CC lib/notify/notify_rpc.o 00:04:15.318 CC lib/notify/notify.o 00:04:15.318 CC lib/trace/trace_rpc.o 00:04:15.318 CC lib/trace/trace.o 00:04:15.318 CC lib/trace/trace_flags.o 00:04:15.318 CC lib/keyring/keyring.o 00:04:15.318 CC lib/keyring/keyring_rpc.o 00:04:15.318 LIB libspdk_notify.a 00:04:15.318 SO libspdk_notify.so.6.0 00:04:15.318 LIB libspdk_keyring.a 00:04:15.318 SYMLINK libspdk_notify.so 00:04:15.318 LIB libspdk_trace.a 00:04:15.318 SO libspdk_keyring.so.2.0 00:04:15.318 SO libspdk_trace.so.11.0 00:04:15.318 SYMLINK libspdk_keyring.so 00:04:15.318 SYMLINK libspdk_trace.so 00:04:15.318 CC lib/sock/sock.o 00:04:15.318 CC lib/sock/sock_rpc.o 00:04:15.318 CC lib/thread/thread.o 00:04:15.318 CC lib/thread/iobuf.o 00:04:15.318 LIB libspdk_sock.a 00:04:15.318 SO libspdk_sock.so.10.0 00:04:15.318 SYMLINK libspdk_sock.so 00:04:15.318 CC lib/nvme/nvme_ctrlr.o 00:04:15.318 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:15.318 CC lib/nvme/nvme_fabric.o 00:04:15.318 CC lib/nvme/nvme_ns_cmd.o 00:04:15.318 CC lib/nvme/nvme_ns.o 00:04:15.318 CC lib/nvme/nvme_pcie_common.o 00:04:15.318 CC lib/nvme/nvme_pcie.o 00:04:15.318 CC lib/nvme/nvme.o 00:04:15.318 CC lib/nvme/nvme_qpair.o 00:04:15.318 CC lib/nvme/nvme_quirks.o 00:04:15.318 LIB libspdk_thread.a 00:04:15.318 CC lib/nvme/nvme_transport.o 00:04:15.318 SO libspdk_thread.so.11.0 00:04:15.318 CC lib/nvme/nvme_discovery.o 00:04:15.318 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:15.318 SYMLINK libspdk_thread.so 00:04:15.318 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:15.318 CC lib/nvme/nvme_tcp.o 00:04:15.318 CC lib/nvme/nvme_opal.o 00:04:15.318 CC lib/accel/accel.o 00:04:15.318 CC lib/accel/accel_rpc.o 00:04:15.318 CC lib/nvme/nvme_io_msg.o 00:04:15.318 CC 
lib/accel/accel_sw.o 00:04:15.318 CC lib/nvme/nvme_poll_group.o 00:04:15.318 CC lib/blob/blobstore.o 00:04:15.318 CC lib/blob/request.o 00:04:15.318 CC lib/init/json_config.o 00:04:15.318 CC lib/blob/zeroes.o 00:04:15.318 CC lib/init/subsystem.o 00:04:15.318 CC lib/blob/blob_bs_dev.o 00:04:15.318 CC lib/init/subsystem_rpc.o 00:04:15.578 CC lib/init/rpc.o 00:04:15.578 CC lib/virtio/virtio.o 00:04:15.578 CC lib/fsdev/fsdev.o 00:04:15.578 CC lib/fsdev/fsdev_io.o 00:04:15.578 CC lib/fsdev/fsdev_rpc.o 00:04:15.578 LIB libspdk_init.a 00:04:15.578 CC lib/virtio/virtio_vhost_user.o 00:04:15.578 SO libspdk_init.so.6.0 00:04:15.838 SYMLINK libspdk_init.so 00:04:15.838 CC lib/nvme/nvme_zns.o 00:04:15.838 CC lib/virtio/virtio_vfio_user.o 00:04:15.838 CC lib/virtio/virtio_pci.o 00:04:15.838 LIB libspdk_accel.a 00:04:15.838 CC lib/nvme/nvme_stubs.o 00:04:16.097 CC lib/nvme/nvme_auth.o 00:04:16.097 SO libspdk_accel.so.16.0 00:04:16.097 CC lib/event/app.o 00:04:16.097 SYMLINK libspdk_accel.so 00:04:16.097 CC lib/event/reactor.o 00:04:16.097 CC lib/event/log_rpc.o 00:04:16.097 CC lib/nvme/nvme_cuse.o 00:04:16.097 LIB libspdk_virtio.a 00:04:16.097 SO libspdk_virtio.so.7.0 00:04:16.356 LIB libspdk_fsdev.a 00:04:16.356 CC lib/event/app_rpc.o 00:04:16.356 SYMLINK libspdk_virtio.so 00:04:16.356 SO libspdk_fsdev.so.2.0 00:04:16.356 CC lib/nvme/nvme_rdma.o 00:04:16.356 SYMLINK libspdk_fsdev.so 00:04:16.356 CC lib/event/scheduler_static.o 00:04:16.356 CC lib/bdev/bdev.o 00:04:16.356 CC lib/bdev/bdev_rpc.o 00:04:16.616 CC lib/bdev/bdev_zone.o 00:04:16.616 CC lib/bdev/part.o 00:04:16.616 LIB libspdk_event.a 00:04:16.616 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:16.616 SO libspdk_event.so.14.0 00:04:16.616 CC lib/bdev/scsi_nvme.o 00:04:16.616 SYMLINK libspdk_event.so 00:04:17.191 LIB libspdk_fuse_dispatcher.a 00:04:17.191 SO libspdk_fuse_dispatcher.so.1.0 00:04:17.470 SYMLINK libspdk_fuse_dispatcher.so 00:04:17.729 LIB libspdk_nvme.a 00:04:17.729 SO libspdk_nvme.so.15.0 00:04:17.988 
SYMLINK libspdk_nvme.so 00:04:18.558 LIB libspdk_blob.a 00:04:18.558 SO libspdk_blob.so.11.0 00:04:18.818 SYMLINK libspdk_blob.so 00:04:19.079 CC lib/lvol/lvol.o 00:04:19.079 CC lib/blobfs/blobfs.o 00:04:19.079 CC lib/blobfs/tree.o 00:04:19.339 LIB libspdk_bdev.a 00:04:19.339 SO libspdk_bdev.so.17.0 00:04:19.339 SYMLINK libspdk_bdev.so 00:04:19.598 CC lib/scsi/dev.o 00:04:19.598 CC lib/scsi/lun.o 00:04:19.598 CC lib/scsi/port.o 00:04:19.598 CC lib/scsi/scsi.o 00:04:19.598 CC lib/ublk/ublk.o 00:04:19.598 CC lib/nbd/nbd.o 00:04:19.598 CC lib/ftl/ftl_core.o 00:04:19.598 CC lib/nvmf/ctrlr.o 00:04:19.856 CC lib/nvmf/ctrlr_discovery.o 00:04:19.856 CC lib/ftl/ftl_init.o 00:04:19.856 CC lib/ftl/ftl_layout.o 00:04:19.856 CC lib/scsi/scsi_bdev.o 00:04:20.114 LIB libspdk_blobfs.a 00:04:20.114 CC lib/ftl/ftl_debug.o 00:04:20.114 SO libspdk_blobfs.so.10.0 00:04:20.114 CC lib/scsi/scsi_pr.o 00:04:20.114 CC lib/nbd/nbd_rpc.o 00:04:20.114 SYMLINK libspdk_blobfs.so 00:04:20.114 CC lib/ftl/ftl_io.o 00:04:20.114 LIB libspdk_lvol.a 00:04:20.114 SO libspdk_lvol.so.10.0 00:04:20.114 SYMLINK libspdk_lvol.so 00:04:20.114 CC lib/ublk/ublk_rpc.o 00:04:20.114 CC lib/nvmf/ctrlr_bdev.o 00:04:20.114 CC lib/ftl/ftl_sb.o 00:04:20.372 LIB libspdk_nbd.a 00:04:20.372 SO libspdk_nbd.so.7.0 00:04:20.372 CC lib/scsi/scsi_rpc.o 00:04:20.372 CC lib/scsi/task.o 00:04:20.372 SYMLINK libspdk_nbd.so 00:04:20.372 CC lib/ftl/ftl_l2p.o 00:04:20.372 CC lib/nvmf/subsystem.o 00:04:20.372 LIB libspdk_ublk.a 00:04:20.372 CC lib/nvmf/nvmf.o 00:04:20.372 SO libspdk_ublk.so.3.0 00:04:20.372 CC lib/ftl/ftl_l2p_flat.o 00:04:20.372 CC lib/ftl/ftl_nv_cache.o 00:04:20.372 SYMLINK libspdk_ublk.so 00:04:20.372 CC lib/ftl/ftl_band.o 00:04:20.631 CC lib/ftl/ftl_band_ops.o 00:04:20.631 CC lib/nvmf/nvmf_rpc.o 00:04:20.631 LIB libspdk_scsi.a 00:04:20.631 SO libspdk_scsi.so.9.0 00:04:20.631 CC lib/nvmf/transport.o 00:04:20.631 SYMLINK libspdk_scsi.so 00:04:20.631 CC lib/nvmf/tcp.o 00:04:20.890 CC lib/ftl/ftl_writer.o 00:04:20.890 
CC lib/ftl/ftl_rq.o 00:04:21.149 CC lib/nvmf/stubs.o 00:04:21.149 CC lib/iscsi/conn.o 00:04:21.149 CC lib/vhost/vhost.o 00:04:21.407 CC lib/vhost/vhost_rpc.o 00:04:21.407 CC lib/nvmf/mdns_server.o 00:04:21.665 CC lib/ftl/ftl_reloc.o 00:04:21.665 CC lib/ftl/ftl_l2p_cache.o 00:04:21.665 CC lib/nvmf/rdma.o 00:04:21.665 CC lib/vhost/vhost_scsi.o 00:04:21.665 CC lib/vhost/vhost_blk.o 00:04:21.924 CC lib/iscsi/init_grp.o 00:04:21.924 CC lib/ftl/ftl_p2l.o 00:04:21.924 CC lib/iscsi/iscsi.o 00:04:22.181 CC lib/iscsi/param.o 00:04:22.181 CC lib/iscsi/portal_grp.o 00:04:22.181 CC lib/nvmf/auth.o 00:04:22.181 CC lib/iscsi/tgt_node.o 00:04:22.439 CC lib/ftl/ftl_p2l_log.o 00:04:22.439 CC lib/ftl/mngt/ftl_mngt.o 00:04:22.439 CC lib/iscsi/iscsi_subsystem.o 00:04:22.696 CC lib/iscsi/iscsi_rpc.o 00:04:22.696 CC lib/vhost/rte_vhost_user.o 00:04:22.696 CC lib/iscsi/task.o 00:04:22.696 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:22.696 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:22.696 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:22.954 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:22.954 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:22.954 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:22.954 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:22.954 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:23.213 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:23.213 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:23.213 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:23.213 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:23.213 CC lib/ftl/utils/ftl_conf.o 00:04:23.213 CC lib/ftl/utils/ftl_md.o 00:04:23.213 CC lib/ftl/utils/ftl_mempool.o 00:04:23.213 CC lib/ftl/utils/ftl_bitmap.o 00:04:23.473 CC lib/ftl/utils/ftl_property.o 00:04:23.473 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:23.473 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:23.473 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:23.473 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:23.473 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:23.473 LIB libspdk_iscsi.a 00:04:23.473 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:23.473 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:23.732 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:23.732 SO libspdk_iscsi.so.8.0 00:04:23.732 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:23.732 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:23.732 LIB libspdk_vhost.a 00:04:23.732 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:23.732 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:23.732 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:23.732 CC lib/ftl/base/ftl_base_dev.o 00:04:23.732 SYMLINK libspdk_iscsi.so 00:04:23.732 CC lib/ftl/base/ftl_base_bdev.o 00:04:23.732 SO libspdk_vhost.so.8.0 00:04:23.990 CC lib/ftl/ftl_trace.o 00:04:23.990 SYMLINK libspdk_vhost.so 00:04:24.249 LIB libspdk_ftl.a 00:04:24.249 LIB libspdk_nvmf.a 00:04:24.249 SO libspdk_nvmf.so.20.0 00:04:24.509 SO libspdk_ftl.so.9.0 00:04:24.509 SYMLINK libspdk_nvmf.so 00:04:24.768 SYMLINK libspdk_ftl.so 00:04:25.029 CC module/env_dpdk/env_dpdk_rpc.o 00:04:25.029 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:25.029 CC module/keyring/file/keyring.o 00:04:25.029 CC module/scheduler/gscheduler/gscheduler.o 00:04:25.029 CC module/accel/error/accel_error.o 00:04:25.029 CC module/keyring/linux/keyring.o 00:04:25.029 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:25.029 CC module/blob/bdev/blob_bdev.o 00:04:25.029 CC module/fsdev/aio/fsdev_aio.o 00:04:25.029 CC module/sock/posix/posix.o 00:04:25.029 LIB libspdk_env_dpdk_rpc.a 00:04:25.288 SO libspdk_env_dpdk_rpc.so.6.0 00:04:25.288 LIB libspdk_scheduler_gscheduler.a 00:04:25.288 LIB libspdk_scheduler_dpdk_governor.a 00:04:25.288 CC module/keyring/file/keyring_rpc.o 00:04:25.288 SYMLINK libspdk_env_dpdk_rpc.so 00:04:25.288 CC module/keyring/linux/keyring_rpc.o 00:04:25.288 SO libspdk_scheduler_gscheduler.so.4.0 00:04:25.288 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:25.288 LIB libspdk_scheduler_dynamic.a 00:04:25.288 CC module/accel/error/accel_error_rpc.o 00:04:25.288 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:25.288 SO libspdk_scheduler_dynamic.so.4.0 00:04:25.288 SYMLINK 
libspdk_scheduler_gscheduler.so 00:04:25.288 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:25.288 LIB libspdk_keyring_file.a 00:04:25.288 LIB libspdk_blob_bdev.a 00:04:25.288 SYMLINK libspdk_scheduler_dynamic.so 00:04:25.546 SO libspdk_keyring_file.so.2.0 00:04:25.547 LIB libspdk_keyring_linux.a 00:04:25.547 SO libspdk_blob_bdev.so.11.0 00:04:25.547 CC module/accel/ioat/accel_ioat.o 00:04:25.547 LIB libspdk_accel_error.a 00:04:25.547 SO libspdk_keyring_linux.so.1.0 00:04:25.547 SYMLINK libspdk_keyring_file.so 00:04:25.547 CC module/accel/dsa/accel_dsa.o 00:04:25.547 SO libspdk_accel_error.so.2.0 00:04:25.547 SYMLINK libspdk_blob_bdev.so 00:04:25.547 CC module/accel/dsa/accel_dsa_rpc.o 00:04:25.547 CC module/accel/ioat/accel_ioat_rpc.o 00:04:25.547 CC module/fsdev/aio/linux_aio_mgr.o 00:04:25.547 SYMLINK libspdk_keyring_linux.so 00:04:25.547 SYMLINK libspdk_accel_error.so 00:04:25.547 CC module/accel/iaa/accel_iaa.o 00:04:25.547 CC module/accel/iaa/accel_iaa_rpc.o 00:04:25.547 LIB libspdk_accel_ioat.a 00:04:25.805 SO libspdk_accel_ioat.so.6.0 00:04:25.805 SYMLINK libspdk_accel_ioat.so 00:04:25.805 CC module/bdev/delay/vbdev_delay.o 00:04:25.805 CC module/bdev/error/vbdev_error.o 00:04:25.805 CC module/blobfs/bdev/blobfs_bdev.o 00:04:25.805 LIB libspdk_accel_iaa.a 00:04:25.805 LIB libspdk_accel_dsa.a 00:04:25.805 SO libspdk_accel_iaa.so.3.0 00:04:25.805 SO libspdk_accel_dsa.so.5.0 00:04:25.805 LIB libspdk_fsdev_aio.a 00:04:25.805 CC module/bdev/gpt/gpt.o 00:04:25.805 CC module/bdev/lvol/vbdev_lvol.o 00:04:25.805 SYMLINK libspdk_accel_iaa.so 00:04:25.805 SO libspdk_fsdev_aio.so.1.0 00:04:25.805 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:25.805 CC module/bdev/malloc/bdev_malloc.o 00:04:26.064 SYMLINK libspdk_accel_dsa.so 00:04:26.064 LIB libspdk_sock_posix.a 00:04:26.064 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:26.064 SYMLINK libspdk_fsdev_aio.so 00:04:26.064 SO libspdk_sock_posix.so.6.0 00:04:26.064 CC module/bdev/gpt/vbdev_gpt.o 00:04:26.064 CC 
module/bdev/null/bdev_null.o 00:04:26.064 CC module/bdev/error/vbdev_error_rpc.o 00:04:26.064 SYMLINK libspdk_sock_posix.so 00:04:26.064 CC module/bdev/nvme/bdev_nvme.o 00:04:26.064 LIB libspdk_blobfs_bdev.a 00:04:26.064 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:26.322 SO libspdk_blobfs_bdev.so.6.0 00:04:26.322 CC module/bdev/passthru/vbdev_passthru.o 00:04:26.322 LIB libspdk_bdev_error.a 00:04:26.322 SYMLINK libspdk_blobfs_bdev.so 00:04:26.322 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:26.322 SO libspdk_bdev_error.so.6.0 00:04:26.322 CC module/bdev/nvme/nvme_rpc.o 00:04:26.322 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:26.322 LIB libspdk_bdev_delay.a 00:04:26.322 SYMLINK libspdk_bdev_error.so 00:04:26.322 CC module/bdev/null/bdev_null_rpc.o 00:04:26.322 LIB libspdk_bdev_gpt.a 00:04:26.322 SO libspdk_bdev_delay.so.6.0 00:04:26.322 SO libspdk_bdev_gpt.so.6.0 00:04:26.322 SYMLINK libspdk_bdev_delay.so 00:04:26.581 LIB libspdk_bdev_lvol.a 00:04:26.581 SYMLINK libspdk_bdev_gpt.so 00:04:26.581 SO libspdk_bdev_lvol.so.6.0 00:04:26.581 LIB libspdk_bdev_malloc.a 00:04:26.581 CC module/bdev/raid/bdev_raid.o 00:04:26.581 SO libspdk_bdev_malloc.so.6.0 00:04:26.581 LIB libspdk_bdev_null.a 00:04:26.581 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:26.581 SO libspdk_bdev_null.so.6.0 00:04:26.581 SYMLINK libspdk_bdev_lvol.so 00:04:26.581 CC module/bdev/raid/bdev_raid_rpc.o 00:04:26.581 CC module/bdev/raid/bdev_raid_sb.o 00:04:26.581 SYMLINK libspdk_bdev_malloc.so 00:04:26.581 CC module/bdev/split/vbdev_split.o 00:04:26.581 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:26.581 SYMLINK libspdk_bdev_null.so 00:04:26.581 CC module/bdev/split/vbdev_split_rpc.o 00:04:26.840 CC module/bdev/aio/bdev_aio.o 00:04:26.840 LIB libspdk_bdev_passthru.a 00:04:26.840 SO libspdk_bdev_passthru.so.6.0 00:04:26.840 CC module/bdev/nvme/bdev_mdns_client.o 00:04:26.840 LIB libspdk_bdev_split.a 00:04:26.840 CC module/bdev/raid/raid0.o 00:04:26.840 SYMLINK libspdk_bdev_passthru.so 
00:04:26.840 CC module/bdev/raid/raid1.o 00:04:26.840 SO libspdk_bdev_split.so.6.0 00:04:26.840 CC module/bdev/raid/concat.o 00:04:27.099 SYMLINK libspdk_bdev_split.so 00:04:27.099 CC module/bdev/raid/raid5f.o 00:04:27.099 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:27.099 CC module/bdev/aio/bdev_aio_rpc.o 00:04:27.099 CC module/bdev/nvme/vbdev_opal.o 00:04:27.099 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:27.099 LIB libspdk_bdev_zone_block.a 00:04:27.099 CC module/bdev/ftl/bdev_ftl.o 00:04:27.099 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:27.099 SO libspdk_bdev_zone_block.so.6.0 00:04:27.099 LIB libspdk_bdev_aio.a 00:04:27.099 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:27.099 SO libspdk_bdev_aio.so.6.0 00:04:27.358 SYMLINK libspdk_bdev_zone_block.so 00:04:27.358 SYMLINK libspdk_bdev_aio.so 00:04:27.358 CC module/bdev/iscsi/bdev_iscsi.o 00:04:27.358 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:27.358 LIB libspdk_bdev_ftl.a 00:04:27.617 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:27.617 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:27.617 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:27.617 SO libspdk_bdev_ftl.so.6.0 00:04:27.617 SYMLINK libspdk_bdev_ftl.so 00:04:27.618 LIB libspdk_bdev_raid.a 00:04:27.618 SO libspdk_bdev_raid.so.6.0 00:04:27.877 LIB libspdk_bdev_iscsi.a 00:04:27.877 SYMLINK libspdk_bdev_raid.so 00:04:27.877 SO libspdk_bdev_iscsi.so.6.0 00:04:27.877 SYMLINK libspdk_bdev_iscsi.so 00:04:28.137 LIB libspdk_bdev_virtio.a 00:04:28.137 SO libspdk_bdev_virtio.so.6.0 00:04:28.137 SYMLINK libspdk_bdev_virtio.so 00:04:29.077 LIB libspdk_bdev_nvme.a 00:04:29.077 SO libspdk_bdev_nvme.so.7.1 00:04:29.077 SYMLINK libspdk_bdev_nvme.so 00:04:29.646 CC module/event/subsystems/scheduler/scheduler.o 00:04:29.646 CC module/event/subsystems/sock/sock.o 00:04:29.646 CC module/event/subsystems/fsdev/fsdev.o 00:04:29.646 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:29.646 CC module/event/subsystems/vmd/vmd.o 00:04:29.646 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:04:29.646 CC module/event/subsystems/iobuf/iobuf.o 00:04:29.646 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:29.646 CC module/event/subsystems/keyring/keyring.o 00:04:29.905 LIB libspdk_event_fsdev.a 00:04:29.905 LIB libspdk_event_sock.a 00:04:29.905 LIB libspdk_event_scheduler.a 00:04:29.905 LIB libspdk_event_vhost_blk.a 00:04:29.905 LIB libspdk_event_vmd.a 00:04:29.905 LIB libspdk_event_keyring.a 00:04:29.905 LIB libspdk_event_iobuf.a 00:04:29.905 SO libspdk_event_vhost_blk.so.3.0 00:04:29.905 SO libspdk_event_fsdev.so.1.0 00:04:29.905 SO libspdk_event_sock.so.5.0 00:04:29.905 SO libspdk_event_scheduler.so.4.0 00:04:29.905 SO libspdk_event_keyring.so.1.0 00:04:29.905 SO libspdk_event_vmd.so.6.0 00:04:29.905 SO libspdk_event_iobuf.so.3.0 00:04:29.905 SYMLINK libspdk_event_vhost_blk.so 00:04:29.905 SYMLINK libspdk_event_sock.so 00:04:29.905 SYMLINK libspdk_event_fsdev.so 00:04:29.905 SYMLINK libspdk_event_keyring.so 00:04:29.905 SYMLINK libspdk_event_scheduler.so 00:04:29.905 SYMLINK libspdk_event_vmd.so 00:04:29.905 SYMLINK libspdk_event_iobuf.so 00:04:30.474 CC module/event/subsystems/accel/accel.o 00:04:30.474 LIB libspdk_event_accel.a 00:04:30.474 SO libspdk_event_accel.so.6.0 00:04:30.474 SYMLINK libspdk_event_accel.so 00:04:31.045 CC module/event/subsystems/bdev/bdev.o 00:04:31.045 LIB libspdk_event_bdev.a 00:04:31.305 SO libspdk_event_bdev.so.6.0 00:04:31.305 SYMLINK libspdk_event_bdev.so 00:04:31.564 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:31.564 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:31.564 CC module/event/subsystems/scsi/scsi.o 00:04:31.564 CC module/event/subsystems/ublk/ublk.o 00:04:31.564 CC module/event/subsystems/nbd/nbd.o 00:04:31.823 LIB libspdk_event_ublk.a 00:04:31.823 LIB libspdk_event_scsi.a 00:04:31.823 SO libspdk_event_ublk.so.3.0 00:04:31.823 LIB libspdk_event_nbd.a 00:04:31.823 LIB libspdk_event_nvmf.a 00:04:31.823 SO libspdk_event_scsi.so.6.0 00:04:31.823 SO 
libspdk_event_nbd.so.6.0 00:04:31.823 SYMLINK libspdk_event_ublk.so 00:04:31.823 SYMLINK libspdk_event_scsi.so 00:04:31.823 SO libspdk_event_nvmf.so.6.0 00:04:31.823 SYMLINK libspdk_event_nbd.so 00:04:31.823 SYMLINK libspdk_event_nvmf.so 00:04:32.083 CC module/event/subsystems/iscsi/iscsi.o 00:04:32.342 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:32.342 LIB libspdk_event_vhost_scsi.a 00:04:32.342 LIB libspdk_event_iscsi.a 00:04:32.342 SO libspdk_event_vhost_scsi.so.3.0 00:04:32.342 SO libspdk_event_iscsi.so.6.0 00:04:32.342 SYMLINK libspdk_event_vhost_scsi.so 00:04:32.601 SYMLINK libspdk_event_iscsi.so 00:04:32.601 SO libspdk.so.6.0 00:04:32.601 SYMLINK libspdk.so 00:04:32.859 CC test/rpc_client/rpc_client_test.o 00:04:32.859 CXX app/trace/trace.o 00:04:32.859 TEST_HEADER include/spdk/accel.h 00:04:32.859 TEST_HEADER include/spdk/accel_module.h 00:04:32.859 TEST_HEADER include/spdk/assert.h 00:04:32.859 TEST_HEADER include/spdk/barrier.h 00:04:32.859 TEST_HEADER include/spdk/base64.h 00:04:32.859 TEST_HEADER include/spdk/bdev.h 00:04:32.859 TEST_HEADER include/spdk/bdev_module.h 00:04:32.859 TEST_HEADER include/spdk/bdev_zone.h 00:04:32.859 TEST_HEADER include/spdk/bit_array.h 00:04:32.859 TEST_HEADER include/spdk/bit_pool.h 00:04:32.859 TEST_HEADER include/spdk/blob_bdev.h 00:04:32.859 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:32.859 TEST_HEADER include/spdk/blobfs.h 00:04:32.859 TEST_HEADER include/spdk/blob.h 00:04:32.859 TEST_HEADER include/spdk/conf.h 00:04:33.118 TEST_HEADER include/spdk/config.h 00:04:33.118 TEST_HEADER include/spdk/cpuset.h 00:04:33.118 TEST_HEADER include/spdk/crc16.h 00:04:33.118 TEST_HEADER include/spdk/crc32.h 00:04:33.118 TEST_HEADER include/spdk/crc64.h 00:04:33.118 TEST_HEADER include/spdk/dif.h 00:04:33.118 TEST_HEADER include/spdk/dma.h 00:04:33.118 TEST_HEADER include/spdk/endian.h 00:04:33.118 TEST_HEADER include/spdk/env_dpdk.h 00:04:33.118 TEST_HEADER include/spdk/env.h 00:04:33.118 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:04:33.118 TEST_HEADER include/spdk/event.h 00:04:33.118 TEST_HEADER include/spdk/fd_group.h 00:04:33.118 TEST_HEADER include/spdk/fd.h 00:04:33.118 TEST_HEADER include/spdk/file.h 00:04:33.118 TEST_HEADER include/spdk/fsdev.h 00:04:33.118 TEST_HEADER include/spdk/fsdev_module.h 00:04:33.118 TEST_HEADER include/spdk/ftl.h 00:04:33.118 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:33.118 TEST_HEADER include/spdk/gpt_spec.h 00:04:33.118 TEST_HEADER include/spdk/hexlify.h 00:04:33.118 CC examples/ioat/perf/perf.o 00:04:33.118 TEST_HEADER include/spdk/histogram_data.h 00:04:33.118 TEST_HEADER include/spdk/idxd.h 00:04:33.118 TEST_HEADER include/spdk/idxd_spec.h 00:04:33.118 TEST_HEADER include/spdk/init.h 00:04:33.118 TEST_HEADER include/spdk/ioat.h 00:04:33.118 TEST_HEADER include/spdk/ioat_spec.h 00:04:33.118 CC test/thread/poller_perf/poller_perf.o 00:04:33.118 TEST_HEADER include/spdk/iscsi_spec.h 00:04:33.118 TEST_HEADER include/spdk/json.h 00:04:33.118 TEST_HEADER include/spdk/jsonrpc.h 00:04:33.118 TEST_HEADER include/spdk/keyring.h 00:04:33.118 TEST_HEADER include/spdk/keyring_module.h 00:04:33.118 TEST_HEADER include/spdk/likely.h 00:04:33.118 CC examples/util/zipf/zipf.o 00:04:33.118 TEST_HEADER include/spdk/log.h 00:04:33.118 TEST_HEADER include/spdk/lvol.h 00:04:33.118 TEST_HEADER include/spdk/md5.h 00:04:33.118 TEST_HEADER include/spdk/memory.h 00:04:33.118 TEST_HEADER include/spdk/mmio.h 00:04:33.118 TEST_HEADER include/spdk/nbd.h 00:04:33.118 TEST_HEADER include/spdk/net.h 00:04:33.118 TEST_HEADER include/spdk/notify.h 00:04:33.118 TEST_HEADER include/spdk/nvme.h 00:04:33.118 TEST_HEADER include/spdk/nvme_intel.h 00:04:33.118 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:33.118 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:33.118 TEST_HEADER include/spdk/nvme_spec.h 00:04:33.118 TEST_HEADER include/spdk/nvme_zns.h 00:04:33.118 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:33.118 TEST_HEADER 
include/spdk/nvmf_fc_spec.h 00:04:33.118 TEST_HEADER include/spdk/nvmf.h 00:04:33.118 TEST_HEADER include/spdk/nvmf_spec.h 00:04:33.118 TEST_HEADER include/spdk/nvmf_transport.h 00:04:33.118 TEST_HEADER include/spdk/opal.h 00:04:33.118 TEST_HEADER include/spdk/opal_spec.h 00:04:33.118 TEST_HEADER include/spdk/pci_ids.h 00:04:33.118 TEST_HEADER include/spdk/pipe.h 00:04:33.118 CC test/app/bdev_svc/bdev_svc.o 00:04:33.118 TEST_HEADER include/spdk/queue.h 00:04:33.118 TEST_HEADER include/spdk/reduce.h 00:04:33.118 CC test/dma/test_dma/test_dma.o 00:04:33.118 TEST_HEADER include/spdk/rpc.h 00:04:33.118 TEST_HEADER include/spdk/scheduler.h 00:04:33.118 TEST_HEADER include/spdk/scsi.h 00:04:33.118 TEST_HEADER include/spdk/scsi_spec.h 00:04:33.118 TEST_HEADER include/spdk/sock.h 00:04:33.118 TEST_HEADER include/spdk/stdinc.h 00:04:33.118 TEST_HEADER include/spdk/string.h 00:04:33.118 TEST_HEADER include/spdk/thread.h 00:04:33.118 TEST_HEADER include/spdk/trace.h 00:04:33.118 TEST_HEADER include/spdk/trace_parser.h 00:04:33.118 TEST_HEADER include/spdk/tree.h 00:04:33.118 TEST_HEADER include/spdk/ublk.h 00:04:33.118 TEST_HEADER include/spdk/util.h 00:04:33.118 TEST_HEADER include/spdk/uuid.h 00:04:33.118 TEST_HEADER include/spdk/version.h 00:04:33.118 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:33.118 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:33.118 CC test/env/mem_callbacks/mem_callbacks.o 00:04:33.119 TEST_HEADER include/spdk/vhost.h 00:04:33.119 TEST_HEADER include/spdk/vmd.h 00:04:33.119 TEST_HEADER include/spdk/xor.h 00:04:33.119 TEST_HEADER include/spdk/zipf.h 00:04:33.119 CXX test/cpp_headers/accel.o 00:04:33.119 LINK rpc_client_test 00:04:33.119 LINK interrupt_tgt 00:04:33.119 LINK poller_perf 00:04:33.119 LINK zipf 00:04:33.378 LINK ioat_perf 00:04:33.378 LINK bdev_svc 00:04:33.378 CXX test/cpp_headers/accel_module.o 00:04:33.378 CXX test/cpp_headers/assert.o 00:04:33.378 CXX test/cpp_headers/barrier.o 00:04:33.378 LINK spdk_trace 00:04:33.378 CC 
app/trace_record/trace_record.o 00:04:33.638 CC examples/ioat/verify/verify.o 00:04:33.638 CXX test/cpp_headers/base64.o 00:04:33.638 CXX test/cpp_headers/bdev.o 00:04:33.638 CC test/event/event_perf/event_perf.o 00:04:33.638 CC app/nvmf_tgt/nvmf_main.o 00:04:33.638 CC app/iscsi_tgt/iscsi_tgt.o 00:04:33.638 LINK test_dma 00:04:33.638 LINK mem_callbacks 00:04:33.638 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:33.638 LINK event_perf 00:04:33.638 CXX test/cpp_headers/bdev_module.o 00:04:33.898 LINK spdk_trace_record 00:04:33.898 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:33.898 LINK verify 00:04:33.898 LINK nvmf_tgt 00:04:33.898 LINK iscsi_tgt 00:04:33.898 CC test/env/vtophys/vtophys.o 00:04:33.898 CC test/event/reactor/reactor.o 00:04:33.898 CXX test/cpp_headers/bdev_zone.o 00:04:33.898 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:34.157 LINK vtophys 00:04:34.157 LINK reactor 00:04:34.157 CC test/accel/dif/dif.o 00:04:34.157 CC examples/thread/thread/thread_ex.o 00:04:34.157 LINK env_dpdk_post_init 00:04:34.157 CXX test/cpp_headers/bit_array.o 00:04:34.157 LINK nvme_fuzz 00:04:34.157 CC examples/sock/hello_world/hello_sock.o 00:04:34.157 CC app/spdk_tgt/spdk_tgt.o 00:04:34.417 CXX test/cpp_headers/bit_pool.o 00:04:34.417 CC test/event/reactor_perf/reactor_perf.o 00:04:34.417 LINK thread 00:04:34.417 CC examples/vmd/lsvmd/lsvmd.o 00:04:34.417 CC test/env/memory/memory_ut.o 00:04:34.417 CC examples/vmd/led/led.o 00:04:34.417 LINK spdk_tgt 00:04:34.417 CXX test/cpp_headers/blob_bdev.o 00:04:34.417 LINK reactor_perf 00:04:34.417 LINK hello_sock 00:04:34.417 LINK lsvmd 00:04:34.417 LINK led 00:04:34.677 CXX test/cpp_headers/blobfs_bdev.o 00:04:34.677 CXX test/cpp_headers/blobfs.o 00:04:34.677 CC app/spdk_lspci/spdk_lspci.o 00:04:34.677 CC test/event/app_repeat/app_repeat.o 00:04:34.677 CC app/spdk_nvme_perf/perf.o 00:04:34.677 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:34.677 LINK spdk_lspci 00:04:34.677 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 
00:04:34.937 CXX test/cpp_headers/blob.o 00:04:34.937 CC examples/idxd/perf/perf.o 00:04:34.937 LINK app_repeat 00:04:34.937 CXX test/cpp_headers/conf.o 00:04:34.937 LINK dif 00:04:34.937 CXX test/cpp_headers/config.o 00:04:34.937 CXX test/cpp_headers/cpuset.o 00:04:34.937 CC test/app/histogram_perf/histogram_perf.o 00:04:34.937 CXX test/cpp_headers/crc16.o 00:04:35.197 CC test/app/jsoncat/jsoncat.o 00:04:35.197 CC test/event/scheduler/scheduler.o 00:04:35.197 LINK histogram_perf 00:04:35.197 LINK idxd_perf 00:04:35.197 LINK jsoncat 00:04:35.197 LINK vhost_fuzz 00:04:35.197 CXX test/cpp_headers/crc32.o 00:04:35.197 CC test/env/pci/pci_ut.o 00:04:35.456 CXX test/cpp_headers/crc64.o 00:04:35.456 LINK scheduler 00:04:35.456 CC test/app/stub/stub.o 00:04:35.456 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:35.456 CXX test/cpp_headers/dif.o 00:04:35.456 LINK memory_ut 00:04:35.456 CC examples/accel/perf/accel_perf.o 00:04:35.456 CC test/blobfs/mkfs/mkfs.o 00:04:35.456 LINK stub 00:04:35.456 LINK spdk_nvme_perf 00:04:35.715 LINK iscsi_fuzz 00:04:35.715 CXX test/cpp_headers/dma.o 00:04:35.715 LINK pci_ut 00:04:35.715 LINK mkfs 00:04:35.715 LINK hello_fsdev 00:04:35.715 CC test/lvol/esnap/esnap.o 00:04:35.715 CXX test/cpp_headers/endian.o 00:04:35.716 CC app/spdk_top/spdk_top.o 00:04:35.716 CC app/spdk_nvme_identify/identify.o 00:04:35.716 CC app/spdk_nvme_discover/discovery_aer.o 00:04:35.974 CXX test/cpp_headers/env_dpdk.o 00:04:35.974 CXX test/cpp_headers/env.o 00:04:35.974 CXX test/cpp_headers/event.o 00:04:35.974 CC app/vhost/vhost.o 00:04:35.974 LINK spdk_nvme_discover 00:04:35.974 LINK accel_perf 00:04:35.974 CC app/spdk_dd/spdk_dd.o 00:04:35.974 CXX test/cpp_headers/fd_group.o 00:04:36.233 LINK vhost 00:04:36.233 CC app/fio/nvme/fio_plugin.o 00:04:36.233 CC test/nvme/aer/aer.o 00:04:36.233 CXX test/cpp_headers/fd.o 00:04:36.233 CC app/fio/bdev/fio_plugin.o 00:04:36.233 CC examples/blob/hello_world/hello_blob.o 00:04:36.493 CXX test/cpp_headers/file.o 
00:04:36.493 LINK spdk_dd 00:04:36.493 CC examples/nvme/hello_world/hello_world.o 00:04:36.493 LINK aer 00:04:36.493 CXX test/cpp_headers/fsdev.o 00:04:36.493 LINK hello_blob 00:04:36.753 LINK hello_world 00:04:36.753 CC examples/nvme/reconnect/reconnect.o 00:04:36.753 LINK spdk_nvme_identify 00:04:36.753 CXX test/cpp_headers/fsdev_module.o 00:04:36.753 CC test/nvme/reset/reset.o 00:04:36.753 LINK spdk_top 00:04:36.753 LINK spdk_nvme 00:04:36.753 LINK spdk_bdev 00:04:36.753 CC examples/blob/cli/blobcli.o 00:04:36.753 CXX test/cpp_headers/ftl.o 00:04:37.012 CC test/nvme/sgl/sgl.o 00:04:37.012 CC test/nvme/overhead/overhead.o 00:04:37.012 CC test/nvme/e2edp/nvme_dp.o 00:04:37.012 CC test/nvme/err_injection/err_injection.o 00:04:37.012 LINK reset 00:04:37.012 CC test/nvme/startup/startup.o 00:04:37.012 LINK reconnect 00:04:37.012 CXX test/cpp_headers/fuse_dispatcher.o 00:04:37.012 LINK err_injection 00:04:37.271 LINK startup 00:04:37.271 LINK sgl 00:04:37.271 CXX test/cpp_headers/gpt_spec.o 00:04:37.271 LINK nvme_dp 00:04:37.271 LINK overhead 00:04:37.271 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:37.271 CXX test/cpp_headers/hexlify.o 00:04:37.271 LINK blobcli 00:04:37.271 CC examples/bdev/hello_world/hello_bdev.o 00:04:37.271 CC test/nvme/reserve/reserve.o 00:04:37.271 CXX test/cpp_headers/histogram_data.o 00:04:37.271 CXX test/cpp_headers/idxd.o 00:04:37.271 CC test/nvme/simple_copy/simple_copy.o 00:04:37.531 CC test/nvme/connect_stress/connect_stress.o 00:04:37.531 CXX test/cpp_headers/idxd_spec.o 00:04:37.531 CC test/nvme/boot_partition/boot_partition.o 00:04:37.531 LINK reserve 00:04:37.531 CC test/nvme/compliance/nvme_compliance.o 00:04:37.531 LINK hello_bdev 00:04:37.531 LINK connect_stress 00:04:37.531 LINK simple_copy 00:04:37.812 CXX test/cpp_headers/init.o 00:04:37.812 CC test/bdev/bdevio/bdevio.o 00:04:37.812 LINK boot_partition 00:04:37.812 CXX test/cpp_headers/ioat.o 00:04:37.812 LINK nvme_manage 00:04:37.812 CXX test/cpp_headers/ioat_spec.o 
00:04:37.812 CXX test/cpp_headers/iscsi_spec.o 00:04:37.812 CXX test/cpp_headers/json.o 00:04:37.812 CC examples/bdev/bdevperf/bdevperf.o 00:04:37.812 CC examples/nvme/arbitration/arbitration.o 00:04:38.085 LINK nvme_compliance 00:04:38.085 CC test/nvme/fused_ordering/fused_ordering.o 00:04:38.085 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:38.085 CC examples/nvme/hotplug/hotplug.o 00:04:38.085 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:38.085 CXX test/cpp_headers/jsonrpc.o 00:04:38.085 LINK bdevio 00:04:38.085 LINK doorbell_aers 00:04:38.085 LINK fused_ordering 00:04:38.085 LINK cmb_copy 00:04:38.085 CC test/nvme/fdp/fdp.o 00:04:38.085 CXX test/cpp_headers/keyring.o 00:04:38.085 LINK hotplug 00:04:38.345 LINK arbitration 00:04:38.345 CXX test/cpp_headers/keyring_module.o 00:04:38.345 CXX test/cpp_headers/likely.o 00:04:38.345 CC test/nvme/cuse/cuse.o 00:04:38.345 CXX test/cpp_headers/log.o 00:04:38.345 CC examples/nvme/abort/abort.o 00:04:38.345 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:38.345 CXX test/cpp_headers/lvol.o 00:04:38.605 CXX test/cpp_headers/memory.o 00:04:38.605 CXX test/cpp_headers/md5.o 00:04:38.605 CXX test/cpp_headers/mmio.o 00:04:38.605 LINK fdp 00:04:38.605 CXX test/cpp_headers/nbd.o 00:04:38.605 LINK pmr_persistence 00:04:38.605 CXX test/cpp_headers/net.o 00:04:38.605 CXX test/cpp_headers/notify.o 00:04:38.605 CXX test/cpp_headers/nvme.o 00:04:38.605 CXX test/cpp_headers/nvme_intel.o 00:04:38.605 CXX test/cpp_headers/nvme_ocssd.o 00:04:38.865 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:38.865 LINK bdevperf 00:04:38.865 LINK abort 00:04:38.865 CXX test/cpp_headers/nvme_spec.o 00:04:38.865 CXX test/cpp_headers/nvme_zns.o 00:04:38.865 CXX test/cpp_headers/nvmf_cmd.o 00:04:38.865 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:38.865 CXX test/cpp_headers/nvmf.o 00:04:38.865 CXX test/cpp_headers/nvmf_spec.o 00:04:38.865 CXX test/cpp_headers/nvmf_transport.o 00:04:38.865 CXX test/cpp_headers/opal.o 00:04:38.865 CXX 
test/cpp_headers/opal_spec.o 00:04:39.124 CXX test/cpp_headers/pci_ids.o 00:04:39.125 CXX test/cpp_headers/pipe.o 00:04:39.125 CXX test/cpp_headers/queue.o 00:04:39.125 CXX test/cpp_headers/reduce.o 00:04:39.125 CXX test/cpp_headers/rpc.o 00:04:39.125 CXX test/cpp_headers/scheduler.o 00:04:39.125 CXX test/cpp_headers/scsi.o 00:04:39.125 CXX test/cpp_headers/scsi_spec.o 00:04:39.125 CC examples/nvmf/nvmf/nvmf.o 00:04:39.125 CXX test/cpp_headers/sock.o 00:04:39.125 CXX test/cpp_headers/stdinc.o 00:04:39.125 CXX test/cpp_headers/string.o 00:04:39.125 CXX test/cpp_headers/thread.o 00:04:39.384 CXX test/cpp_headers/trace.o 00:04:39.384 CXX test/cpp_headers/trace_parser.o 00:04:39.384 CXX test/cpp_headers/tree.o 00:04:39.384 CXX test/cpp_headers/ublk.o 00:04:39.384 CXX test/cpp_headers/util.o 00:04:39.384 CXX test/cpp_headers/uuid.o 00:04:39.384 CXX test/cpp_headers/version.o 00:04:39.384 CXX test/cpp_headers/vfio_user_pci.o 00:04:39.384 CXX test/cpp_headers/vfio_user_spec.o 00:04:39.384 CXX test/cpp_headers/vhost.o 00:04:39.384 CXX test/cpp_headers/vmd.o 00:04:39.384 LINK nvmf 00:04:39.384 CXX test/cpp_headers/xor.o 00:04:39.384 CXX test/cpp_headers/zipf.o 00:04:39.643 LINK cuse 00:04:41.551 LINK esnap 00:04:41.551 00:04:41.551 real 1m15.029s 00:04:41.551 user 5m58.722s 00:04:41.551 sys 1m7.270s 00:04:41.551 04:50:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:41.551 04:50:58 make -- common/autotest_common.sh@10 -- $ set +x 00:04:41.551 ************************************ 00:04:41.551 END TEST make 00:04:41.551 ************************************ 00:04:41.551 04:50:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:41.551 04:50:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:41.551 04:50:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:41.551 04:50:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.551 04:50:58 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:41.551 04:50:58 -- pm/common@44 -- $ pid=6202 00:04:41.551 04:50:58 -- pm/common@50 -- $ kill -TERM 6202 00:04:41.551 04:50:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.551 04:50:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:41.551 04:50:58 -- pm/common@44 -- $ pid=6204 00:04:41.551 04:50:58 -- pm/common@50 -- $ kill -TERM 6204 00:04:41.812 04:50:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:41.812 04:50:58 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:41.812 04:50:58 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.812 04:50:58 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.812 04:50:58 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.812 04:50:58 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.812 04:50:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.812 04:50:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.812 04:50:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.812 04:50:58 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.812 04:50:58 -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.812 04:50:58 -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.812 04:50:58 -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.812 04:50:58 -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.812 04:50:58 -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.812 04:50:58 -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.812 04:50:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.812 04:50:58 -- scripts/common.sh@344 -- # case "$op" in 00:04:41.812 04:50:58 -- scripts/common.sh@345 -- # : 1 00:04:41.812 04:50:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.812 04:50:58 -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.812 04:50:58 -- scripts/common.sh@365 -- # decimal 1 00:04:41.812 04:50:58 -- scripts/common.sh@353 -- # local d=1 00:04:41.812 04:50:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.812 04:50:58 -- scripts/common.sh@355 -- # echo 1 00:04:41.812 04:50:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.812 04:50:58 -- scripts/common.sh@366 -- # decimal 2 00:04:41.812 04:50:58 -- scripts/common.sh@353 -- # local d=2 00:04:41.812 04:50:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.812 04:50:58 -- scripts/common.sh@355 -- # echo 2 00:04:41.812 04:50:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.812 04:50:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.812 04:50:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.812 04:50:58 -- scripts/common.sh@368 -- # return 0 00:04:41.812 04:50:58 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.812 04:50:58 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.812 --rc genhtml_branch_coverage=1 00:04:41.812 --rc genhtml_function_coverage=1 00:04:41.812 --rc genhtml_legend=1 00:04:41.812 --rc geninfo_all_blocks=1 00:04:41.812 --rc geninfo_unexecuted_blocks=1 00:04:41.812 00:04:41.812 ' 00:04:41.812 04:50:58 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.812 --rc genhtml_branch_coverage=1 00:04:41.812 --rc genhtml_function_coverage=1 00:04:41.812 --rc genhtml_legend=1 00:04:41.812 --rc geninfo_all_blocks=1 00:04:41.812 --rc geninfo_unexecuted_blocks=1 00:04:41.812 00:04:41.812 ' 00:04:41.812 04:50:58 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.812 --rc 
genhtml_branch_coverage=1 00:04:41.812 --rc genhtml_function_coverage=1 00:04:41.812 --rc genhtml_legend=1 00:04:41.812 --rc geninfo_all_blocks=1 00:04:41.812 --rc geninfo_unexecuted_blocks=1 00:04:41.812 00:04:41.812 ' 00:04:41.812 04:50:58 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.812 --rc genhtml_branch_coverage=1 00:04:41.812 --rc genhtml_function_coverage=1 00:04:41.812 --rc genhtml_legend=1 00:04:41.812 --rc geninfo_all_blocks=1 00:04:41.812 --rc geninfo_unexecuted_blocks=1 00:04:41.812 00:04:41.812 ' 00:04:41.812 04:50:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.812 04:50:58 -- nvmf/common.sh@7 -- # uname -s 00:04:41.812 04:50:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.812 04:50:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.812 04:50:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.812 04:50:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.812 04:50:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.812 04:50:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.812 04:50:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.812 04:50:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.812 04:50:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.812 04:50:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.812 04:50:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c25ce9c2-d5ba-4cb7-beaf-bef433e902a6 00:04:41.812 04:50:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=c25ce9c2-d5ba-4cb7-beaf-bef433e902a6 00:04:41.812 04:50:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.812 04:50:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.812 04:50:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.812 04:50:58 -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.812 04:50:58 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.812 04:50:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.812 04:50:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.812 04:50:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.812 04:50:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.812 04:50:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.812 04:50:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.812 04:50:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.812 04:50:58 -- paths/export.sh@5 -- # export PATH 00:04:41.812 04:50:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.812 04:50:58 -- nvmf/common.sh@51 -- # : 0 00:04:41.812 04:50:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.812 04:50:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:41.812 04:50:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.812 04:50:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.812 04:50:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.812 04:50:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.812 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.812 04:50:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.812 04:50:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.812 04:50:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.071 04:50:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:42.071 04:50:58 -- spdk/autotest.sh@32 -- # uname -s 00:04:42.072 04:50:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:42.072 04:50:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:42.072 04:50:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:42.072 04:50:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:42.072 04:50:58 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:42.072 04:50:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:42.072 04:50:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:42.072 04:50:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:42.072 04:50:58 -- spdk/autotest.sh@48 -- # udevadm_pid=66849 00:04:42.072 04:50:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:42.072 04:50:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:42.072 04:50:58 -- pm/common@17 -- # local monitor 00:04:42.072 04:50:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.072 04:50:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.072 04:50:58 -- pm/common@25 -- # sleep 1 00:04:42.072 04:50:58 -- pm/common@21 -- 
# date +%s 00:04:42.072 04:50:58 -- pm/common@21 -- # date +%s 00:04:42.072 04:50:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732164658 00:04:42.072 04:50:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732164658 00:04:42.072 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732164658_collect-cpu-load.pm.log 00:04:42.072 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732164658_collect-vmstat.pm.log 00:04:43.019 04:50:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:43.019 04:50:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:43.019 04:50:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.019 04:50:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.019 04:50:59 -- spdk/autotest.sh@59 -- # create_test_list 00:04:43.019 04:50:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:43.019 04:50:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.019 04:50:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:43.019 04:50:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:43.019 04:50:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:43.019 04:50:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:43.019 04:50:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:43.019 04:50:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:43.019 04:50:59 -- common/autotest_common.sh@1457 -- # uname 00:04:43.019 04:50:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:43.019 04:50:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:43.019 04:50:59 
-- common/autotest_common.sh@1477 -- # uname 00:04:43.019 04:50:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:43.019 04:50:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:43.019 04:50:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:43.279 lcov: LCOV version 1.15 00:04:43.279 04:50:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:58.173 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:58.173 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:13.085 04:51:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:13.085 04:51:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.085 04:51:27 -- common/autotest_common.sh@10 -- # set +x 00:05:13.085 04:51:27 -- spdk/autotest.sh@78 -- # rm -f 00:05:13.085 04:51:27 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.085 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:13.085 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:13.085 04:51:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:13.085 04:51:28 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:13.085 04:51:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:13.085 04:51:28 -- common/autotest_common.sh@1658 -- 
# local nvme bdf 00:05:13.085 04:51:28 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.085 04:51:28 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:13.085 04:51:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:13.085 04:51:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:13.085 04:51:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.085 04:51:28 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.085 04:51:28 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:13.085 04:51:28 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:13.085 04:51:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:13.085 04:51:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.085 04:51:28 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.085 04:51:28 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:13.085 04:51:28 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:13.085 04:51:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:13.085 04:51:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.085 04:51:28 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.085 04:51:28 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:13.085 04:51:28 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:13.085 04:51:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:13.085 04:51:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.085 04:51:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:13.085 04:51:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.085 04:51:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:13.085 04:51:28 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:13.085 04:51:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:13.085 04:51:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:13.085 No valid GPT data, bailing 00:05:13.085 04:51:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:13.085 04:51:28 -- scripts/common.sh@394 -- # pt= 00:05:13.085 04:51:28 -- scripts/common.sh@395 -- # return 1 00:05:13.085 04:51:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:13.085 1+0 records in 00:05:13.085 1+0 records out 00:05:13.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662208 s, 158 MB/s 00:05:13.085 04:51:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.085 04:51:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:13.085 04:51:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:13.085 04:51:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:13.085 04:51:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:13.085 No valid GPT data, bailing 00:05:13.085 04:51:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:13.085 04:51:28 -- scripts/common.sh@394 -- # pt= 00:05:13.085 04:51:28 -- scripts/common.sh@395 -- # return 1 00:05:13.085 04:51:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:13.085 1+0 records in 00:05:13.085 1+0 records out 00:05:13.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00608156 s, 172 MB/s 00:05:13.085 04:51:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.085 04:51:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:13.085 04:51:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:13.085 04:51:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:13.085 04:51:28 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:13.085 No valid GPT data, bailing 00:05:13.085 04:51:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:13.085 04:51:28 -- scripts/common.sh@394 -- # pt= 00:05:13.085 04:51:28 -- scripts/common.sh@395 -- # return 1 00:05:13.085 04:51:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:13.085 1+0 records in 00:05:13.085 1+0 records out 00:05:13.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00652312 s, 161 MB/s 00:05:13.085 04:51:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.085 04:51:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:13.085 04:51:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:13.085 04:51:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:13.085 04:51:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:13.085 No valid GPT data, bailing 00:05:13.085 04:51:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:13.085 04:51:28 -- scripts/common.sh@394 -- # pt= 00:05:13.085 04:51:28 -- scripts/common.sh@395 -- # return 1 00:05:13.085 04:51:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:13.085 1+0 records in 00:05:13.085 1+0 records out 00:05:13.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00617633 s, 170 MB/s 00:05:13.085 04:51:28 -- spdk/autotest.sh@105 -- # sync 00:05:13.085 04:51:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:13.085 04:51:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:13.085 04:51:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:15.025 04:51:31 -- spdk/autotest.sh@111 -- # uname -s 00:05:15.025 04:51:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:15.025 04:51:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:15.025 04:51:31 -- spdk/autotest.sh@115 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:15.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.965 Hugepages 00:05:15.965 node hugesize free / total 00:05:15.965 node0 1048576kB 0 / 0 00:05:15.965 node0 2048kB 0 / 0 00:05:15.965 00:05:15.965 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:15.965 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:15.965 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:16.225 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:16.225 04:51:32 -- spdk/autotest.sh@117 -- # uname -s 00:05:16.225 04:51:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:16.225 04:51:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:16.225 04:51:32 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.052 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.052 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.052 04:51:33 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:18.428 04:51:34 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:18.428 04:51:34 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:18.428 04:51:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:18.429 04:51:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:18.429 04:51:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:18.429 04:51:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:18.429 04:51:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.429 04:51:34 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:18.429 04:51:34 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:05:18.429 04:51:34 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:18.429 04:51:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:18.429 04:51:34 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.687 Waiting for block devices as requested 00:05:18.687 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.946 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.946 04:51:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:18.946 04:51:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:18.946 04:51:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:18.946 04:51:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:18.947 04:51:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:18.947 04:51:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:18.947 04:51:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:18.947 04:51:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:18.947 04:51:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:18.947 04:51:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:18.947 04:51:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:18.947 04:51:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:18.947 04:51:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:18.947 04:51:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:18.947 04:51:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 
00:05:18.947 04:51:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:18.947 04:51:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:18.947 04:51:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:18.947 04:51:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:18.947 04:51:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:18.947 04:51:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:18.947 04:51:35 -- common/autotest_common.sh@1543 -- # continue 00:05:18.947 04:51:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:18.947 04:51:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:18.947 04:51:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:18.947 04:51:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:18.947 04:51:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:18.947 04:51:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:18.947 04:51:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:18.947 04:51:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:18.947 04:51:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:18.947 04:51:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:18.947 04:51:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:18.947 04:51:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:18.947 04:51:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:19.206 04:51:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:19.206 04:51:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:19.206 04:51:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:19.206 04:51:35 
-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:19.206 04:51:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:19.206 04:51:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:19.206 04:51:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:19.206 04:51:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:19.206 04:51:35 -- common/autotest_common.sh@1543 -- # continue 00:05:19.206 04:51:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:19.206 04:51:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.206 04:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:19.206 04:51:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:19.206 04:51:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.206 04:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:19.206 04:51:35 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.034 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.034 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.034 04:51:36 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:20.034 04:51:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.034 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:20.292 04:51:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:20.292 04:51:36 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:20.292 04:51:36 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:20.292 04:51:36 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:20.292 04:51:36 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:20.292 04:51:36 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:20.292 04:51:36 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:20.293 04:51:36 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:20.293 04:51:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:20.293 04:51:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:20.293 04:51:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.293 04:51:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:20.293 04:51:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:20.293 04:51:36 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:20.293 04:51:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:20.293 04:51:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:20.293 04:51:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:20.293 04:51:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:20.293 04:51:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:20.293 04:51:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:20.293 04:51:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:20.293 04:51:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:20.293 04:51:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:20.293 04:51:36 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:20.293 04:51:36 -- common/autotest_common.sh@1572 -- # return 0 00:05:20.293 04:51:36 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:20.293 04:51:36 -- common/autotest_common.sh@1580 -- # return 0 00:05:20.293 04:51:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:20.293 04:51:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:20.293 04:51:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:20.293 04:51:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:20.293 04:51:36 -- 
spdk/autotest.sh@149 -- # timing_enter lib 00:05:20.293 04:51:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.293 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:20.293 04:51:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:20.293 04:51:36 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:20.293 04:51:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.293 04:51:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.293 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:20.293 ************************************ 00:05:20.293 START TEST env 00:05:20.293 ************************************ 00:05:20.293 04:51:36 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:20.552 * Looking for test storage... 00:05:20.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:20.552 04:51:37 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.552 04:51:37 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.552 04:51:37 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.552 04:51:37 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.552 04:51:37 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.552 04:51:37 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.552 04:51:37 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.552 04:51:37 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.552 04:51:37 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.552 04:51:37 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.552 04:51:37 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.552 04:51:37 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.552 04:51:37 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.552 04:51:37 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.552 04:51:37 env -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:05:20.552 04:51:37 env -- scripts/common.sh@344 -- # case "$op" in 00:05:20.552 04:51:37 env -- scripts/common.sh@345 -- # : 1 00:05:20.552 04:51:37 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.552 04:51:37 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.552 04:51:37 env -- scripts/common.sh@365 -- # decimal 1 00:05:20.552 04:51:37 env -- scripts/common.sh@353 -- # local d=1 00:05:20.552 04:51:37 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.552 04:51:37 env -- scripts/common.sh@355 -- # echo 1 00:05:20.552 04:51:37 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.552 04:51:37 env -- scripts/common.sh@366 -- # decimal 2 00:05:20.552 04:51:37 env -- scripts/common.sh@353 -- # local d=2 00:05:20.552 04:51:37 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.552 04:51:37 env -- scripts/common.sh@355 -- # echo 2 00:05:20.552 04:51:37 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.553 04:51:37 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.553 04:51:37 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.553 04:51:37 env -- scripts/common.sh@368 -- # return 0 00:05:20.553 04:51:37 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.553 04:51:37 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.553 --rc genhtml_branch_coverage=1 00:05:20.553 --rc genhtml_function_coverage=1 00:05:20.553 --rc genhtml_legend=1 00:05:20.553 --rc geninfo_all_blocks=1 00:05:20.553 --rc geninfo_unexecuted_blocks=1 00:05:20.553 00:05:20.553 ' 00:05:20.553 04:51:37 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.553 --rc genhtml_branch_coverage=1 00:05:20.553 --rc genhtml_function_coverage=1 
00:05:20.553 --rc genhtml_legend=1 00:05:20.553 --rc geninfo_all_blocks=1 00:05:20.553 --rc geninfo_unexecuted_blocks=1 00:05:20.553 00:05:20.553 ' 00:05:20.553 04:51:37 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.553 --rc genhtml_branch_coverage=1 00:05:20.553 --rc genhtml_function_coverage=1 00:05:20.553 --rc genhtml_legend=1 00:05:20.553 --rc geninfo_all_blocks=1 00:05:20.553 --rc geninfo_unexecuted_blocks=1 00:05:20.553 00:05:20.553 ' 00:05:20.553 04:51:37 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.553 --rc genhtml_branch_coverage=1 00:05:20.553 --rc genhtml_function_coverage=1 00:05:20.553 --rc genhtml_legend=1 00:05:20.553 --rc geninfo_all_blocks=1 00:05:20.553 --rc geninfo_unexecuted_blocks=1 00:05:20.553 00:05:20.553 ' 00:05:20.553 04:51:37 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:20.553 04:51:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.553 04:51:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.553 04:51:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.553 ************************************ 00:05:20.553 START TEST env_memory 00:05:20.553 ************************************ 00:05:20.553 04:51:37 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:20.553 00:05:20.553 00:05:20.553 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.553 http://cunit.sourceforge.net/ 00:05:20.553 00:05:20.553 00:05:20.553 Suite: memory 00:05:20.553 Test: alloc and free memory map ...[2024-11-21 04:51:37.247721] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:20.812 passed 00:05:20.812 Test: mem map translation 
...[2024-11-21 04:51:37.294183] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:20.812 [2024-11-21 04:51:37.294267] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:20.812 [2024-11-21 04:51:37.294335] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:20.812 [2024-11-21 04:51:37.294371] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:20.812 passed 00:05:20.812 Test: mem map registration ...[2024-11-21 04:51:37.362512] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:20.812 [2024-11-21 04:51:37.362609] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:20.812 passed 00:05:20.812 Test: mem map adjacent registrations ...passed 00:05:20.812 00:05:20.812 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.812 suites 1 1 n/a 0 0 00:05:20.812 tests 4 4 4 0 0 00:05:20.812 asserts 152 152 152 0 n/a 00:05:20.812 00:05:20.812 Elapsed time = 0.249 seconds 00:05:20.812 00:05:20.812 real 0m0.303s 00:05:20.812 user 0m0.253s 00:05:20.812 sys 0m0.038s 00:05:20.812 04:51:37 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.812 04:51:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:20.812 ************************************ 00:05:20.812 END TEST env_memory 00:05:20.812 ************************************ 00:05:20.812 04:51:37 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:20.812 04:51:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.812 04:51:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.812 04:51:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.812 ************************************ 00:05:20.812 START TEST env_vtophys 00:05:20.812 ************************************ 00:05:20.812 04:51:37 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:21.071 EAL: lib.eal log level changed from notice to debug 00:05:21.071 EAL: Detected lcore 0 as core 0 on socket 0 00:05:21.071 EAL: Detected lcore 1 as core 0 on socket 0 00:05:21.071 EAL: Detected lcore 2 as core 0 on socket 0 00:05:21.071 EAL: Detected lcore 3 as core 0 on socket 0 00:05:21.071 EAL: Detected lcore 4 as core 0 on socket 0 00:05:21.071 EAL: Detected lcore 5 as core 0 on socket 0 00:05:21.071 EAL: Detected lcore 6 as core 0 on socket 0 00:05:21.071 EAL: Detected lcore 7 as core 0 on socket 0 00:05:21.071 EAL: Detected lcore 8 as core 0 on socket 0 00:05:21.071 EAL: Detected lcore 9 as core 0 on socket 0 00:05:21.071 EAL: Maximum logical cores by configuration: 128 00:05:21.071 EAL: Detected CPU lcores: 10 00:05:21.071 EAL: Detected NUMA nodes: 1 00:05:21.071 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:21.071 EAL: Detected shared linkage of DPDK 00:05:21.071 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:21.071 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:21.071 EAL: Registered [vdev] bus. 
00:05:21.071 EAL: bus.vdev log level changed from disabled to notice 00:05:21.071 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:21.071 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:21.071 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:21.071 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:21.071 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:21.071 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:21.071 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:21.071 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:21.071 EAL: No shared files mode enabled, IPC will be disabled 00:05:21.071 EAL: No shared files mode enabled, IPC is disabled 00:05:21.071 EAL: Selected IOVA mode 'PA' 00:05:21.071 EAL: Probing VFIO support... 00:05:21.071 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:21.071 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:21.071 EAL: Ask a virtual area of 0x2e000 bytes 00:05:21.071 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:21.071 EAL: Setting up physically contiguous memory... 
00:05:21.071 EAL: Setting maximum number of open files to 524288 00:05:21.071 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:21.071 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:21.071 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.072 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:21.072 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.072 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.072 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:21.072 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:21.072 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.072 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:21.072 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.072 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.072 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:21.072 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:21.072 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.072 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:21.072 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.072 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.072 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:21.072 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:21.072 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.072 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:21.072 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.072 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.072 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:21.072 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:21.072 EAL: Hugepages will be freed exactly as allocated. 
00:05:21.072 EAL: No shared files mode enabled, IPC is disabled 00:05:21.072 EAL: No shared files mode enabled, IPC is disabled 00:05:21.072 EAL: TSC frequency is ~2290000 KHz 00:05:21.072 EAL: Main lcore 0 is ready (tid=7fa60f465a40;cpuset=[0]) 00:05:21.072 EAL: Trying to obtain current memory policy. 00:05:21.072 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.072 EAL: Restoring previous memory policy: 0 00:05:21.072 EAL: request: mp_malloc_sync 00:05:21.072 EAL: No shared files mode enabled, IPC is disabled 00:05:21.072 EAL: Heap on socket 0 was expanded by 2MB 00:05:21.072 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:21.072 EAL: No shared files mode enabled, IPC is disabled 00:05:21.072 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:21.072 EAL: Mem event callback 'spdk:(nil)' registered 00:05:21.072 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:21.072 00:05:21.072 00:05:21.072 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.072 http://cunit.sourceforge.net/ 00:05:21.072 00:05:21.072 00:05:21.072 Suite: components_suite 00:05:21.640 Test: vtophys_malloc_test ...passed 00:05:21.640 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:21.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.640 EAL: Restoring previous memory policy: 4 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was expanded by 4MB 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was shrunk by 4MB 00:05:21.640 EAL: Trying to obtain current memory policy. 
00:05:21.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.640 EAL: Restoring previous memory policy: 4 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was expanded by 6MB 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was shrunk by 6MB 00:05:21.640 EAL: Trying to obtain current memory policy. 00:05:21.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.640 EAL: Restoring previous memory policy: 4 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was expanded by 10MB 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was shrunk by 10MB 00:05:21.640 EAL: Trying to obtain current memory policy. 00:05:21.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.640 EAL: Restoring previous memory policy: 4 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was expanded by 18MB 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was shrunk by 18MB 00:05:21.640 EAL: Trying to obtain current memory policy. 
00:05:21.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.640 EAL: Restoring previous memory policy: 4 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was expanded by 34MB 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was shrunk by 34MB 00:05:21.640 EAL: Trying to obtain current memory policy. 00:05:21.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.640 EAL: Restoring previous memory policy: 4 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was expanded by 66MB 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was shrunk by 66MB 00:05:21.640 EAL: Trying to obtain current memory policy. 00:05:21.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.640 EAL: Restoring previous memory policy: 4 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was expanded by 130MB 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was shrunk by 130MB 00:05:21.640 EAL: Trying to obtain current memory policy. 
00:05:21.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.640 EAL: Restoring previous memory policy: 4 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was expanded by 258MB 00:05:21.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.640 EAL: request: mp_malloc_sync 00:05:21.640 EAL: No shared files mode enabled, IPC is disabled 00:05:21.640 EAL: Heap on socket 0 was shrunk by 258MB 00:05:21.640 EAL: Trying to obtain current memory policy. 00:05:21.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.899 EAL: Restoring previous memory policy: 4 00:05:21.899 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.899 EAL: request: mp_malloc_sync 00:05:21.899 EAL: No shared files mode enabled, IPC is disabled 00:05:21.899 EAL: Heap on socket 0 was expanded by 514MB 00:05:21.899 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.159 EAL: request: mp_malloc_sync 00:05:22.159 EAL: No shared files mode enabled, IPC is disabled 00:05:22.159 EAL: Heap on socket 0 was shrunk by 514MB 00:05:22.159 EAL: Trying to obtain current memory policy. 
00:05:22.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.159 EAL: Restoring previous memory policy: 4 00:05:22.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.159 EAL: request: mp_malloc_sync 00:05:22.159 EAL: No shared files mode enabled, IPC is disabled 00:05:22.159 EAL: Heap on socket 0 was expanded by 1026MB 00:05:22.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.677 passed 00:05:22.677 00:05:22.677 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.677 suites 1 1 n/a 0 0 00:05:22.677 tests 2 2 2 0 0 00:05:22.677 asserts 5358 5358 5358 0 n/a 00:05:22.677 00:05:22.677 Elapsed time = 1.385 seconds 00:05:22.677 EAL: request: mp_malloc_sync 00:05:22.677 EAL: No shared files mode enabled, IPC is disabled 00:05:22.677 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:22.677 00:05:22.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.677 EAL: request: mp_malloc_sync 00:05:22.677 EAL: No shared files mode enabled, IPC is disabled 00:05:22.677 EAL: Heap on socket 0 was shrunk by 2MB 00:05:22.677 EAL: No shared files mode enabled, IPC is disabled 00:05:22.677 EAL: No shared files mode enabled, IPC is disabled 00:05:22.677 EAL: No shared files mode enabled, IPC is disabled 00:05:22.677 00:05:22.677 real 0m1.660s 00:05:22.677 user 0m0.771s 00:05:22.677 sys 0m0.754s 00:05:22.677 04:51:39 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.677 04:51:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:22.677 ************************************ 00:05:22.677 END TEST env_vtophys 00:05:22.677 ************************************ 00:05:22.677 04:51:39 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:22.677 04:51:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.677 04:51:39 env
************************************ 00:05:22.677 START TEST env_pci 00:05:22.677 ************************************ 00:05:22.677 04:51:39 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:22.677 00:05:22.677 00:05:22.677 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.677 http://cunit.sourceforge.net/ 00:05:22.677 00:05:22.677 00:05:22.677 Suite: pci 00:05:22.677 Test: pci_hook ...[2024-11-21 04:51:39.307256] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69089 has claimed it 00:05:22.677 EAL: Cannot find device (10000:00:01.0) 00:05:22.677 EAL: Failed to attach device on primary process 00:05:22.677 passed 00:05:22.677 00:05:22.677 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.677 suites 1 1 n/a 0 0 00:05:22.677 tests 1 1 1 0 0 00:05:22.677 asserts 25 25 25 0 n/a 00:05:22.677 00:05:22.677 Elapsed time = 0.010 seconds 00:05:22.677 00:05:22.677 real 0m0.103s 00:05:22.677 user 0m0.049s 00:05:22.677 sys 0m0.053s 00:05:22.677 04:51:39 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.677 04:51:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:22.677 ************************************ 00:05:22.677 END TEST env_pci 00:05:22.677 ************************************ 00:05:22.935 04:51:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:22.935 04:51:39 env -- env/env.sh@15 -- # uname 00:05:22.935 04:51:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:22.935 04:51:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:22.935 04:51:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.935 04:51:39 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:22.935 04:51:39 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.935 04:51:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.935 ************************************ 00:05:22.935 START TEST env_dpdk_post_init 00:05:22.935 ************************************ 00:05:22.935 04:51:39 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.935 EAL: Detected CPU lcores: 10 00:05:22.935 EAL: Detected NUMA nodes: 1 00:05:22.935 EAL: Detected shared linkage of DPDK 00:05:22.935 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.936 EAL: Selected IOVA mode 'PA' 00:05:22.936 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.936 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:22.936 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:23.194 Starting DPDK initialization... 00:05:23.194 Starting SPDK post initialization... 00:05:23.194 SPDK NVMe probe 00:05:23.194 Attaching to 0000:00:10.0 00:05:23.194 Attaching to 0000:00:11.0 00:05:23.194 Attached to 0000:00:10.0 00:05:23.194 Attached to 0000:00:11.0 00:05:23.194 Cleaning up... 
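The probed controllers here report PCI id 1b36:0010 (QEMU NVMe), and earlier in this log opal_revert_cleanup filters the same BDFs by reading /sys/bus/pci/devices/<bdf>/device and comparing against 0x0a54. A hedged sketch of that filtering pattern follows; the function name and the sysfs-root parameter (added so it can be exercised against a fake tree) are illustrative, not the actual autotest_common.sh helper:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the device-id filter seen in the trace above:
# read <sysfs>/<bdf>/device for each BDF and keep those matching the wanted id.
# $1 = sysfs root (e.g. /sys/bus/pci/devices), $2 = device id (e.g. 0x0a54),
# remaining args = candidate BDFs.
filter_bdfs_by_id() {
    local sysfs=$1 want=$2 bdf dev
    shift 2
    for bdf in "$@"; do
        dev=$(cat "$sysfs/$bdf/device")
        # The log shows e.g. device=0x0010 compared against 0x0a54.
        if [[ $dev == "$want" ]]; then
            printf '%s\n' "$bdf"
        fi
    done
}
```

Against the 0x0010 devices in this run, a 0x0a54 filter returns nothing, which is why the opal_revert_cleanup loop above matched no BDFs and returned early.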
00:05:23.194 00:05:23.194 real 0m0.253s 00:05:23.194 user 0m0.077s 00:05:23.194 sys 0m0.078s 00:05:23.194 04:51:39 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.194 04:51:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.194 ************************************ 00:05:23.194 END TEST env_dpdk_post_init 00:05:23.194 ************************************ 00:05:23.194 04:51:39 env -- env/env.sh@26 -- # uname 00:05:23.194 04:51:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:23.194 04:51:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.194 04:51:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.194 04:51:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.194 04:51:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.194 ************************************ 00:05:23.194 START TEST env_mem_callbacks 00:05:23.194 ************************************ 00:05:23.194 04:51:39 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.194 EAL: Detected CPU lcores: 10 00:05:23.194 EAL: Detected NUMA nodes: 1 00:05:23.194 EAL: Detected shared linkage of DPDK 00:05:23.194 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.194 EAL: Selected IOVA mode 'PA' 00:05:23.454 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.454 00:05:23.454 00:05:23.454 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.454 http://cunit.sourceforge.net/ 00:05:23.454 00:05:23.454 00:05:23.454 Suite: memory 00:05:23.454 Test: test ... 
00:05:23.454 register 0x200000200000 2097152 00:05:23.454 malloc 3145728 00:05:23.454 register 0x200000400000 4194304 00:05:23.454 buf 0x200000500000 len 3145728 PASSED 00:05:23.454 malloc 64 00:05:23.454 buf 0x2000004fff40 len 64 PASSED 00:05:23.454 malloc 4194304 00:05:23.454 register 0x200000800000 6291456 00:05:23.454 buf 0x200000a00000 len 4194304 PASSED 00:05:23.454 free 0x200000500000 3145728 00:05:23.454 free 0x2000004fff40 64 00:05:23.454 unregister 0x200000400000 4194304 PASSED 00:05:23.454 free 0x200000a00000 4194304 00:05:23.454 unregister 0x200000800000 6291456 PASSED 00:05:23.454 malloc 8388608 00:05:23.454 register 0x200000400000 10485760 00:05:23.454 buf 0x200000600000 len 8388608 PASSED 00:05:23.454 free 0x200000600000 8388608 00:05:23.454 unregister 0x200000400000 10485760 PASSED 00:05:23.454 passed 00:05:23.454 00:05:23.454 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.454 suites 1 1 n/a 0 0 00:05:23.454 tests 1 1 1 0 0 00:05:23.454 asserts 15 15 15 0 n/a 00:05:23.454 00:05:23.454 Elapsed time = 0.013 seconds 00:05:23.454 00:05:23.454 real 0m0.206s 00:05:23.454 user 0m0.039s 00:05:23.454 sys 0m0.065s 00:05:23.454 04:51:39 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.454 04:51:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:23.454 ************************************ 00:05:23.454 END TEST env_mem_callbacks 00:05:23.454 ************************************ 00:05:23.454 00:05:23.454 real 0m3.101s 00:05:23.454 user 0m1.419s 00:05:23.454 sys 0m1.355s 00:05:23.454 04:51:40 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.454 04:51:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.454 ************************************ 00:05:23.454 END TEST env 00:05:23.454 ************************************ 00:05:23.454 04:51:40 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:23.454 04:51:40 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.454 04:51:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.454 04:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:23.454 ************************************ 00:05:23.454 START TEST rpc 00:05:23.454 ************************************ 00:05:23.454 04:51:40 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:23.731 * Looking for test storage... 00:05:23.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.731 04:51:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.731 04:51:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.731 04:51:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.731 04:51:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.731 04:51:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.731 04:51:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.731 04:51:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.731 04:51:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.731 04:51:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.731 04:51:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.731 04:51:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.731 04:51:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.731 04:51:40 rpc -- scripts/common.sh@345 -- # : 1 00:05:23.731 04:51:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.731 04:51:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.731 04:51:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.731 04:51:40 rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.731 04:51:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.731 04:51:40 rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.731 04:51:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.731 04:51:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.731 04:51:40 rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.731 04:51:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.731 04:51:40 rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.731 04:51:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.731 04:51:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.731 04:51:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.731 04:51:40 rpc -- scripts/common.sh@368 -- # return 0 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.731 --rc genhtml_branch_coverage=1 00:05:23.731 --rc genhtml_function_coverage=1 00:05:23.731 --rc genhtml_legend=1 00:05:23.731 --rc geninfo_all_blocks=1 00:05:23.731 --rc geninfo_unexecuted_blocks=1 00:05:23.731 00:05:23.731 ' 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.731 --rc genhtml_branch_coverage=1 00:05:23.731 --rc genhtml_function_coverage=1 00:05:23.731 --rc genhtml_legend=1 00:05:23.731 --rc geninfo_all_blocks=1 00:05:23.731 --rc geninfo_unexecuted_blocks=1 00:05:23.731 00:05:23.731 ' 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:23.731 --rc genhtml_branch_coverage=1 00:05:23.731 --rc genhtml_function_coverage=1 00:05:23.731 --rc genhtml_legend=1 00:05:23.731 --rc geninfo_all_blocks=1 00:05:23.731 --rc geninfo_unexecuted_blocks=1 00:05:23.731 00:05:23.731 ' 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.731 --rc genhtml_branch_coverage=1 00:05:23.731 --rc genhtml_function_coverage=1 00:05:23.731 --rc genhtml_legend=1 00:05:23.731 --rc geninfo_all_blocks=1 00:05:23.731 --rc geninfo_unexecuted_blocks=1 00:05:23.731 00:05:23.731 ' 00:05:23.731 04:51:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69216 00:05:23.731 04:51:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:23.731 04:51:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.731 04:51:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69216 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@835 -- # '[' -z 69216 ']' 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.731 04:51:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.731 [2024-11-21 04:51:40.426710] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:05:23.731 [2024-11-21 04:51:40.426885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69216 ] 00:05:23.999 [2024-11-21 04:51:40.601307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.999 [2024-11-21 04:51:40.631757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:23.999 [2024-11-21 04:51:40.631824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69216' to capture a snapshot of events at runtime. 00:05:23.999 [2024-11-21 04:51:40.631846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.000 [2024-11-21 04:51:40.631862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.000 [2024-11-21 04:51:40.631873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69216 for offline analysis/debug. 
00:05:24.000 [2024-11-21 04:51:40.632305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.567 04:51:41 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.567 04:51:41 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.567 04:51:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.567 04:51:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.567 04:51:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:24.567 04:51:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:24.567 04:51:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.567 04:51:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.567 04:51:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.567 ************************************ 00:05:24.567 START TEST rpc_integrity 00:05:24.567 ************************************ 00:05:24.567 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:24.826 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:24.827 04:51:41 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:24.827 { 00:05:24.827 "name": "Malloc0", 00:05:24.827 "aliases": [ 00:05:24.827 "94be4e36-5805-4170-b178-b46c69f5e25a" 00:05:24.827 ], 00:05:24.827 "product_name": "Malloc disk", 00:05:24.827 "block_size": 512, 00:05:24.827 "num_blocks": 16384, 00:05:24.827 "uuid": "94be4e36-5805-4170-b178-b46c69f5e25a", 00:05:24.827 "assigned_rate_limits": { 00:05:24.827 "rw_ios_per_sec": 0, 00:05:24.827 "rw_mbytes_per_sec": 0, 00:05:24.827 "r_mbytes_per_sec": 0, 00:05:24.827 "w_mbytes_per_sec": 0 00:05:24.827 }, 00:05:24.827 "claimed": false, 00:05:24.827 "zoned": false, 00:05:24.827 "supported_io_types": { 00:05:24.827 "read": true, 00:05:24.827 "write": true, 00:05:24.827 "unmap": true, 00:05:24.827 "flush": true, 00:05:24.827 "reset": true, 00:05:24.827 "nvme_admin": false, 00:05:24.827 "nvme_io": false, 00:05:24.827 "nvme_io_md": false, 00:05:24.827 "write_zeroes": true, 00:05:24.827 "zcopy": true, 00:05:24.827 "get_zone_info": false, 00:05:24.827 "zone_management": false, 00:05:24.827 "zone_append": false, 00:05:24.827 "compare": false, 00:05:24.827 "compare_and_write": false, 00:05:24.827 "abort": true, 00:05:24.827 "seek_hole": false, 
00:05:24.827 "seek_data": false, 00:05:24.827 "copy": true, 00:05:24.827 "nvme_iov_md": false 00:05:24.827 }, 00:05:24.827 "memory_domains": [ 00:05:24.827 { 00:05:24.827 "dma_device_id": "system", 00:05:24.827 "dma_device_type": 1 00:05:24.827 }, 00:05:24.827 { 00:05:24.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.827 "dma_device_type": 2 00:05:24.827 } 00:05:24.827 ], 00:05:24.827 "driver_specific": {} 00:05:24.827 } 00:05:24.827 ]' 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.827 [2024-11-21 04:51:41.456351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:24.827 [2024-11-21 04:51:41.456438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.827 [2024-11-21 04:51:41.456487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:24.827 [2024-11-21 04:51:41.456498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.827 [2024-11-21 04:51:41.459263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.827 [2024-11-21 04:51:41.459325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.827 Passthru0 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.827 { 00:05:24.827 "name": "Malloc0", 00:05:24.827 "aliases": [ 00:05:24.827 "94be4e36-5805-4170-b178-b46c69f5e25a" 00:05:24.827 ], 00:05:24.827 "product_name": "Malloc disk", 00:05:24.827 "block_size": 512, 00:05:24.827 "num_blocks": 16384, 00:05:24.827 "uuid": "94be4e36-5805-4170-b178-b46c69f5e25a", 00:05:24.827 "assigned_rate_limits": { 00:05:24.827 "rw_ios_per_sec": 0, 00:05:24.827 "rw_mbytes_per_sec": 0, 00:05:24.827 "r_mbytes_per_sec": 0, 00:05:24.827 "w_mbytes_per_sec": 0 00:05:24.827 }, 00:05:24.827 "claimed": true, 00:05:24.827 "claim_type": "exclusive_write", 00:05:24.827 "zoned": false, 00:05:24.827 "supported_io_types": { 00:05:24.827 "read": true, 00:05:24.827 "write": true, 00:05:24.827 "unmap": true, 00:05:24.827 "flush": true, 00:05:24.827 "reset": true, 00:05:24.827 "nvme_admin": false, 00:05:24.827 "nvme_io": false, 00:05:24.827 "nvme_io_md": false, 00:05:24.827 "write_zeroes": true, 00:05:24.827 "zcopy": true, 00:05:24.827 "get_zone_info": false, 00:05:24.827 "zone_management": false, 00:05:24.827 "zone_append": false, 00:05:24.827 "compare": false, 00:05:24.827 "compare_and_write": false, 00:05:24.827 "abort": true, 00:05:24.827 "seek_hole": false, 00:05:24.827 "seek_data": false, 00:05:24.827 "copy": true, 00:05:24.827 "nvme_iov_md": false 00:05:24.827 }, 00:05:24.827 "memory_domains": [ 00:05:24.827 { 00:05:24.827 "dma_device_id": "system", 00:05:24.827 "dma_device_type": 1 00:05:24.827 }, 00:05:24.827 { 00:05:24.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.827 "dma_device_type": 2 00:05:24.827 } 00:05:24.827 ], 00:05:24.827 "driver_specific": {} 00:05:24.827 }, 00:05:24.827 { 00:05:24.827 "name": "Passthru0", 00:05:24.827 "aliases": [ 00:05:24.827 "981f7c9d-7097-54b0-8a05-2e58eb7bef69" 00:05:24.827 ], 00:05:24.827 "product_name": "passthru", 00:05:24.827 
"block_size": 512, 00:05:24.827 "num_blocks": 16384, 00:05:24.827 "uuid": "981f7c9d-7097-54b0-8a05-2e58eb7bef69", 00:05:24.827 "assigned_rate_limits": { 00:05:24.827 "rw_ios_per_sec": 0, 00:05:24.827 "rw_mbytes_per_sec": 0, 00:05:24.827 "r_mbytes_per_sec": 0, 00:05:24.827 "w_mbytes_per_sec": 0 00:05:24.827 }, 00:05:24.827 "claimed": false, 00:05:24.827 "zoned": false, 00:05:24.827 "supported_io_types": { 00:05:24.827 "read": true, 00:05:24.827 "write": true, 00:05:24.827 "unmap": true, 00:05:24.827 "flush": true, 00:05:24.827 "reset": true, 00:05:24.827 "nvme_admin": false, 00:05:24.827 "nvme_io": false, 00:05:24.827 "nvme_io_md": false, 00:05:24.827 "write_zeroes": true, 00:05:24.827 "zcopy": true, 00:05:24.827 "get_zone_info": false, 00:05:24.827 "zone_management": false, 00:05:24.827 "zone_append": false, 00:05:24.827 "compare": false, 00:05:24.827 "compare_and_write": false, 00:05:24.827 "abort": true, 00:05:24.827 "seek_hole": false, 00:05:24.827 "seek_data": false, 00:05:24.827 "copy": true, 00:05:24.827 "nvme_iov_md": false 00:05:24.827 }, 00:05:24.827 "memory_domains": [ 00:05:24.827 { 00:05:24.827 "dma_device_id": "system", 00:05:24.827 "dma_device_type": 1 00:05:24.827 }, 00:05:24.827 { 00:05:24.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.827 "dma_device_type": 2 00:05:24.827 } 00:05:24.827 ], 00:05:24.827 "driver_specific": { 00:05:24.827 "passthru": { 00:05:24.827 "name": "Passthru0", 00:05:24.827 "base_bdev_name": "Malloc0" 00:05:24.827 } 00:05:24.827 } 00:05:24.827 } 00:05:24.827 ]' 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.827 04:51:41 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.827 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.827 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.087 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.087 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.087 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.087 04:51:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.087 00:05:25.087 real 0m0.317s 00:05:25.087 user 0m0.186s 00:05:25.087 sys 0m0.057s 00:05:25.087 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.087 04:51:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.087 ************************************ 00:05:25.087 END TEST rpc_integrity 00:05:25.087 ************************************ 00:05:25.087 04:51:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:25.087 04:51:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.087 04:51:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.087 04:51:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.087 ************************************ 00:05:25.087 START TEST rpc_plugins 00:05:25.087 ************************************ 00:05:25.087 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:25.087 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:25.087 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.087 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.087 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.087 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:25.087 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:25.087 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.087 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.087 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.087 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:25.088 { 00:05:25.088 "name": "Malloc1", 00:05:25.088 "aliases": [ 00:05:25.088 "f123257a-9678-4aca-b1d7-3ca241a63b5f" 00:05:25.088 ], 00:05:25.088 "product_name": "Malloc disk", 00:05:25.088 "block_size": 4096, 00:05:25.088 "num_blocks": 256, 00:05:25.088 "uuid": "f123257a-9678-4aca-b1d7-3ca241a63b5f", 00:05:25.088 "assigned_rate_limits": { 00:05:25.088 "rw_ios_per_sec": 0, 00:05:25.088 "rw_mbytes_per_sec": 0, 00:05:25.088 "r_mbytes_per_sec": 0, 00:05:25.088 "w_mbytes_per_sec": 0 00:05:25.088 }, 00:05:25.088 "claimed": false, 00:05:25.088 "zoned": false, 00:05:25.088 "supported_io_types": { 00:05:25.088 "read": true, 00:05:25.088 "write": true, 00:05:25.088 "unmap": true, 00:05:25.088 "flush": true, 00:05:25.088 "reset": true, 00:05:25.088 "nvme_admin": false, 00:05:25.088 "nvme_io": false, 00:05:25.088 "nvme_io_md": false, 00:05:25.088 "write_zeroes": true, 00:05:25.088 "zcopy": true, 00:05:25.088 "get_zone_info": false, 00:05:25.088 "zone_management": false, 00:05:25.088 "zone_append": false, 00:05:25.088 "compare": false, 00:05:25.088 "compare_and_write": false, 00:05:25.088 "abort": true, 00:05:25.088 "seek_hole": false, 00:05:25.088 "seek_data": false, 00:05:25.088 "copy": 
true, 00:05:25.088 "nvme_iov_md": false 00:05:25.088 }, 00:05:25.088 "memory_domains": [ 00:05:25.088 { 00:05:25.088 "dma_device_id": "system", 00:05:25.088 "dma_device_type": 1 00:05:25.088 }, 00:05:25.088 { 00:05:25.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.088 "dma_device_type": 2 00:05:25.088 } 00:05:25.088 ], 00:05:25.088 "driver_specific": {} 00:05:25.088 } 00:05:25.088 ]' 00:05:25.088 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:25.088 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:25.088 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:25.088 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.088 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.088 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.088 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:25.088 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.088 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.088 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.088 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:25.088 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:25.347 04:51:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:25.347 00:05:25.348 real 0m0.170s 00:05:25.348 user 0m0.095s 00:05:25.348 sys 0m0.035s 00:05:25.348 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.348 04:51:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.348 ************************************ 00:05:25.348 END TEST rpc_plugins 00:05:25.348 ************************************ 00:05:25.348 04:51:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:25.348 04:51:41 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.348 04:51:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.348 04:51:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.348 ************************************ 00:05:25.348 START TEST rpc_trace_cmd_test 00:05:25.348 ************************************ 00:05:25.348 04:51:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:25.348 04:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:25.348 04:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:25.348 04:51:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.348 04:51:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.348 04:51:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.348 04:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:25.348 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69216", 00:05:25.348 "tpoint_group_mask": "0x8", 00:05:25.348 "iscsi_conn": { 00:05:25.348 "mask": "0x2", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "scsi": { 00:05:25.348 "mask": "0x4", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "bdev": { 00:05:25.348 "mask": "0x8", 00:05:25.348 "tpoint_mask": "0xffffffffffffffff" 00:05:25.348 }, 00:05:25.348 "nvmf_rdma": { 00:05:25.348 "mask": "0x10", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "nvmf_tcp": { 00:05:25.348 "mask": "0x20", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "ftl": { 00:05:25.348 "mask": "0x40", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "blobfs": { 00:05:25.348 "mask": "0x80", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "dsa": { 00:05:25.348 "mask": "0x200", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "thread": { 00:05:25.348 "mask": "0x400", 00:05:25.348 
"tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "nvme_pcie": { 00:05:25.348 "mask": "0x800", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "iaa": { 00:05:25.348 "mask": "0x1000", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "nvme_tcp": { 00:05:25.348 "mask": "0x2000", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "bdev_nvme": { 00:05:25.348 "mask": "0x4000", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "sock": { 00:05:25.348 "mask": "0x8000", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "blob": { 00:05:25.348 "mask": "0x10000", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "bdev_raid": { 00:05:25.348 "mask": "0x20000", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 }, 00:05:25.348 "scheduler": { 00:05:25.348 "mask": "0x40000", 00:05:25.348 "tpoint_mask": "0x0" 00:05:25.348 } 00:05:25.348 }' 00:05:25.348 04:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:25.348 04:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:25.348 04:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:25.348 04:51:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:25.348 04:51:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:25.607 04:51:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:25.607 04:51:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:25.607 04:51:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:25.607 04:51:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:25.607 04:51:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:25.607 00:05:25.607 real 0m0.270s 00:05:25.607 user 0m0.215s 00:05:25.607 sys 0m0.045s 00:05:25.607 04:51:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:25.607 04:51:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.607 ************************************ 00:05:25.607 END TEST rpc_trace_cmd_test 00:05:25.607 ************************************ 00:05:25.607 04:51:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:25.607 04:51:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:25.607 04:51:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:25.607 04:51:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.607 04:51:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.607 04:51:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.607 ************************************ 00:05:25.607 START TEST rpc_daemon_integrity 00:05:25.607 ************************************ 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.607 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.866 { 00:05:25.866 "name": "Malloc2", 00:05:25.866 "aliases": [ 00:05:25.866 "0fcf41d0-cd10-4ba5-8499-dfe524c39584" 00:05:25.866 ], 00:05:25.866 "product_name": "Malloc disk", 00:05:25.866 "block_size": 512, 00:05:25.866 "num_blocks": 16384, 00:05:25.866 "uuid": "0fcf41d0-cd10-4ba5-8499-dfe524c39584", 00:05:25.866 "assigned_rate_limits": { 00:05:25.866 "rw_ios_per_sec": 0, 00:05:25.866 "rw_mbytes_per_sec": 0, 00:05:25.866 "r_mbytes_per_sec": 0, 00:05:25.866 "w_mbytes_per_sec": 0 00:05:25.866 }, 00:05:25.866 "claimed": false, 00:05:25.866 "zoned": false, 00:05:25.866 "supported_io_types": { 00:05:25.866 "read": true, 00:05:25.866 "write": true, 00:05:25.866 "unmap": true, 00:05:25.866 "flush": true, 00:05:25.866 "reset": true, 00:05:25.866 "nvme_admin": false, 00:05:25.866 "nvme_io": false, 00:05:25.866 "nvme_io_md": false, 00:05:25.866 "write_zeroes": true, 00:05:25.866 "zcopy": true, 00:05:25.866 "get_zone_info": false, 00:05:25.866 "zone_management": false, 00:05:25.866 "zone_append": false, 00:05:25.866 "compare": false, 00:05:25.866 "compare_and_write": false, 00:05:25.866 "abort": true, 00:05:25.866 "seek_hole": false, 00:05:25.866 "seek_data": false, 00:05:25.866 "copy": true, 00:05:25.866 "nvme_iov_md": false 00:05:25.866 }, 00:05:25.866 "memory_domains": [ 00:05:25.866 { 00:05:25.866 "dma_device_id": "system", 00:05:25.866 "dma_device_type": 1 00:05:25.866 }, 00:05:25.866 { 00:05:25.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.866 "dma_device_type": 2 00:05:25.866 } 
00:05:25.866 ], 00:05:25.866 "driver_specific": {} 00:05:25.866 } 00:05:25.866 ]' 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.866 [2024-11-21 04:51:42.391787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:25.866 [2024-11-21 04:51:42.391872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.866 [2024-11-21 04:51:42.391906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:25.866 [2024-11-21 04:51:42.391917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.866 [2024-11-21 04:51:42.394635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.866 [2024-11-21 04:51:42.394692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.866 Passthru0 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.866 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.866 { 00:05:25.866 "name": "Malloc2", 00:05:25.866 "aliases": [ 00:05:25.866 "0fcf41d0-cd10-4ba5-8499-dfe524c39584" 
00:05:25.866 ], 00:05:25.866 "product_name": "Malloc disk", 00:05:25.866 "block_size": 512, 00:05:25.866 "num_blocks": 16384, 00:05:25.866 "uuid": "0fcf41d0-cd10-4ba5-8499-dfe524c39584", 00:05:25.866 "assigned_rate_limits": { 00:05:25.866 "rw_ios_per_sec": 0, 00:05:25.866 "rw_mbytes_per_sec": 0, 00:05:25.866 "r_mbytes_per_sec": 0, 00:05:25.866 "w_mbytes_per_sec": 0 00:05:25.866 }, 00:05:25.866 "claimed": true, 00:05:25.866 "claim_type": "exclusive_write", 00:05:25.866 "zoned": false, 00:05:25.866 "supported_io_types": { 00:05:25.866 "read": true, 00:05:25.866 "write": true, 00:05:25.866 "unmap": true, 00:05:25.866 "flush": true, 00:05:25.866 "reset": true, 00:05:25.866 "nvme_admin": false, 00:05:25.866 "nvme_io": false, 00:05:25.866 "nvme_io_md": false, 00:05:25.866 "write_zeroes": true, 00:05:25.866 "zcopy": true, 00:05:25.866 "get_zone_info": false, 00:05:25.866 "zone_management": false, 00:05:25.866 "zone_append": false, 00:05:25.866 "compare": false, 00:05:25.866 "compare_and_write": false, 00:05:25.866 "abort": true, 00:05:25.866 "seek_hole": false, 00:05:25.866 "seek_data": false, 00:05:25.866 "copy": true, 00:05:25.866 "nvme_iov_md": false 00:05:25.866 }, 00:05:25.866 "memory_domains": [ 00:05:25.866 { 00:05:25.866 "dma_device_id": "system", 00:05:25.866 "dma_device_type": 1 00:05:25.866 }, 00:05:25.866 { 00:05:25.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.866 "dma_device_type": 2 00:05:25.866 } 00:05:25.866 ], 00:05:25.866 "driver_specific": {} 00:05:25.866 }, 00:05:25.866 { 00:05:25.866 "name": "Passthru0", 00:05:25.866 "aliases": [ 00:05:25.866 "0de9d662-c3a0-5499-b593-1ed01ff5915b" 00:05:25.866 ], 00:05:25.866 "product_name": "passthru", 00:05:25.866 "block_size": 512, 00:05:25.866 "num_blocks": 16384, 00:05:25.866 "uuid": "0de9d662-c3a0-5499-b593-1ed01ff5915b", 00:05:25.867 "assigned_rate_limits": { 00:05:25.867 "rw_ios_per_sec": 0, 00:05:25.867 "rw_mbytes_per_sec": 0, 00:05:25.867 "r_mbytes_per_sec": 0, 00:05:25.867 "w_mbytes_per_sec": 0 
00:05:25.867 }, 00:05:25.867 "claimed": false, 00:05:25.867 "zoned": false, 00:05:25.867 "supported_io_types": { 00:05:25.867 "read": true, 00:05:25.867 "write": true, 00:05:25.867 "unmap": true, 00:05:25.867 "flush": true, 00:05:25.867 "reset": true, 00:05:25.867 "nvme_admin": false, 00:05:25.867 "nvme_io": false, 00:05:25.867 "nvme_io_md": false, 00:05:25.867 "write_zeroes": true, 00:05:25.867 "zcopy": true, 00:05:25.867 "get_zone_info": false, 00:05:25.867 "zone_management": false, 00:05:25.867 "zone_append": false, 00:05:25.867 "compare": false, 00:05:25.867 "compare_and_write": false, 00:05:25.867 "abort": true, 00:05:25.867 "seek_hole": false, 00:05:25.867 "seek_data": false, 00:05:25.867 "copy": true, 00:05:25.867 "nvme_iov_md": false 00:05:25.867 }, 00:05:25.867 "memory_domains": [ 00:05:25.867 { 00:05:25.867 "dma_device_id": "system", 00:05:25.867 "dma_device_type": 1 00:05:25.867 }, 00:05:25.867 { 00:05:25.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.867 "dma_device_type": 2 00:05:25.867 } 00:05:25.867 ], 00:05:25.867 "driver_specific": { 00:05:25.867 "passthru": { 00:05:25.867 "name": "Passthru0", 00:05:25.867 "base_bdev_name": "Malloc2" 00:05:25.867 } 00:05:25.867 } 00:05:25.867 } 00:05:25.867 ]' 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.867 00:05:25.867 real 0m0.320s 00:05:25.867 user 0m0.196s 00:05:25.867 sys 0m0.056s 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.867 04:51:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.867 ************************************ 00:05:25.867 END TEST rpc_daemon_integrity 00:05:25.867 ************************************ 00:05:26.126 04:51:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:26.126 04:51:42 rpc -- rpc/rpc.sh@84 -- # killprocess 69216 00:05:26.126 04:51:42 rpc -- common/autotest_common.sh@954 -- # '[' -z 69216 ']' 00:05:26.126 04:51:42 rpc -- common/autotest_common.sh@958 -- # kill -0 69216 00:05:26.126 04:51:42 rpc -- common/autotest_common.sh@959 -- # uname 00:05:26.126 04:51:42 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.126 04:51:42 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69216 00:05:26.126 04:51:42 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.126 04:51:42 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.126 
killing process with pid 69216 00:05:26.126 04:51:42 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69216' 00:05:26.126 04:51:42 rpc -- common/autotest_common.sh@973 -- # kill 69216 00:05:26.126 04:51:42 rpc -- common/autotest_common.sh@978 -- # wait 69216 00:05:26.385 00:05:26.385 real 0m2.932s 00:05:26.385 user 0m3.540s 00:05:26.385 sys 0m0.910s 00:05:26.385 04:51:43 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.385 04:51:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.385 ************************************ 00:05:26.385 END TEST rpc 00:05:26.385 ************************************ 00:05:26.385 04:51:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:26.385 04:51:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.385 04:51:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.385 04:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:26.385 ************************************ 00:05:26.385 START TEST skip_rpc 00:05:26.385 ************************************ 00:05:26.385 04:51:43 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:26.644 * Looking for test storage... 
00:05:26.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.644 04:51:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.644 --rc genhtml_branch_coverage=1 00:05:26.644 --rc genhtml_function_coverage=1 00:05:26.644 --rc genhtml_legend=1 00:05:26.644 --rc geninfo_all_blocks=1 00:05:26.644 --rc geninfo_unexecuted_blocks=1 00:05:26.644 00:05:26.644 ' 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.644 --rc genhtml_branch_coverage=1 00:05:26.644 --rc genhtml_function_coverage=1 00:05:26.644 --rc genhtml_legend=1 00:05:26.644 --rc geninfo_all_blocks=1 00:05:26.644 --rc geninfo_unexecuted_blocks=1 00:05:26.644 00:05:26.644 ' 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:26.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.644 --rc genhtml_branch_coverage=1 00:05:26.644 --rc genhtml_function_coverage=1 00:05:26.644 --rc genhtml_legend=1 00:05:26.644 --rc geninfo_all_blocks=1 00:05:26.644 --rc geninfo_unexecuted_blocks=1 00:05:26.644 00:05:26.644 ' 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.644 --rc genhtml_branch_coverage=1 00:05:26.644 --rc genhtml_function_coverage=1 00:05:26.644 --rc genhtml_legend=1 00:05:26.644 --rc geninfo_all_blocks=1 00:05:26.644 --rc geninfo_unexecuted_blocks=1 00:05:26.644 00:05:26.644 ' 00:05:26.644 04:51:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.644 04:51:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.644 04:51:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.644 04:51:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.644 ************************************ 00:05:26.644 START TEST skip_rpc 00:05:26.644 ************************************ 00:05:26.644 04:51:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:26.644 04:51:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69423 00:05:26.644 04:51:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.644 04:51:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:26.644 04:51:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:26.904 [2024-11-21 04:51:43.438154] Starting SPDK v25.01-pre 
git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:05:26.904 [2024-11-21 04:51:43.438317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69423 ] 00:05:26.904 [2024-11-21 04:51:43.612521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.163 [2024-11-21 04:51:43.643372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69423 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69423 ']' 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69423 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69423 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.434 killing process with pid 69423 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69423' 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69423 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69423 00:05:32.434 00:05:32.434 real 0m5.426s 00:05:32.434 user 0m5.007s 00:05:32.434 sys 0m0.345s 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.434 04:51:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.434 ************************************ 00:05:32.434 END TEST skip_rpc 00:05:32.434 ************************************ 00:05:32.434 04:51:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:32.434 04:51:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.434 04:51:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.434 04:51:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.434 
************************************ 00:05:32.434 START TEST skip_rpc_with_json 00:05:32.434 ************************************ 00:05:32.434 04:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69510 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69510 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69510 ']' 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.435 04:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.435 [2024-11-21 04:51:48.925212] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:05:32.435 [2024-11-21 04:51:48.925365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69510 ] 00:05:32.435 [2024-11-21 04:51:49.095679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.435 [2024-11-21 04:51:49.124949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.369 [2024-11-21 04:51:49.764064] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:33.369 request: 00:05:33.369 { 00:05:33.369 "trtype": "tcp", 00:05:33.369 "method": "nvmf_get_transports", 00:05:33.369 "req_id": 1 00:05:33.369 } 00:05:33.369 Got JSON-RPC error response 00:05:33.369 response: 00:05:33.369 { 00:05:33.369 "code": -19, 00:05:33.369 "message": "No such device" 00:05:33.369 } 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.369 [2024-11-21 04:51:49.776166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.369 04:51:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.369 { 00:05:33.369 "subsystems": [ 00:05:33.369 { 00:05:33.369 "subsystem": "fsdev", 00:05:33.369 "config": [ 00:05:33.369 { 00:05:33.369 "method": "fsdev_set_opts", 00:05:33.369 "params": { 00:05:33.369 "fsdev_io_pool_size": 65535, 00:05:33.369 "fsdev_io_cache_size": 256 00:05:33.370 } 00:05:33.370 } 00:05:33.370 ] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "keyring", 00:05:33.370 "config": [] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "iobuf", 00:05:33.370 "config": [ 00:05:33.370 { 00:05:33.370 "method": "iobuf_set_options", 00:05:33.370 "params": { 00:05:33.370 "small_pool_count": 8192, 00:05:33.370 "large_pool_count": 1024, 00:05:33.370 "small_bufsize": 8192, 00:05:33.370 "large_bufsize": 135168, 00:05:33.370 "enable_numa": false 00:05:33.370 } 00:05:33.370 } 00:05:33.370 ] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "sock", 00:05:33.370 "config": [ 00:05:33.370 { 00:05:33.370 "method": "sock_set_default_impl", 00:05:33.370 "params": { 00:05:33.370 "impl_name": "posix" 00:05:33.370 } 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "method": "sock_impl_set_options", 00:05:33.370 "params": { 00:05:33.370 "impl_name": "ssl", 00:05:33.370 "recv_buf_size": 4096, 00:05:33.370 "send_buf_size": 4096, 00:05:33.370 "enable_recv_pipe": true, 00:05:33.370 "enable_quickack": false, 00:05:33.370 
"enable_placement_id": 0, 00:05:33.370 "enable_zerocopy_send_server": true, 00:05:33.370 "enable_zerocopy_send_client": false, 00:05:33.370 "zerocopy_threshold": 0, 00:05:33.370 "tls_version": 0, 00:05:33.370 "enable_ktls": false 00:05:33.370 } 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "method": "sock_impl_set_options", 00:05:33.370 "params": { 00:05:33.370 "impl_name": "posix", 00:05:33.370 "recv_buf_size": 2097152, 00:05:33.370 "send_buf_size": 2097152, 00:05:33.370 "enable_recv_pipe": true, 00:05:33.370 "enable_quickack": false, 00:05:33.370 "enable_placement_id": 0, 00:05:33.370 "enable_zerocopy_send_server": true, 00:05:33.370 "enable_zerocopy_send_client": false, 00:05:33.370 "zerocopy_threshold": 0, 00:05:33.370 "tls_version": 0, 00:05:33.370 "enable_ktls": false 00:05:33.370 } 00:05:33.370 } 00:05:33.370 ] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "vmd", 00:05:33.370 "config": [] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "accel", 00:05:33.370 "config": [ 00:05:33.370 { 00:05:33.370 "method": "accel_set_options", 00:05:33.370 "params": { 00:05:33.370 "small_cache_size": 128, 00:05:33.370 "large_cache_size": 16, 00:05:33.370 "task_count": 2048, 00:05:33.370 "sequence_count": 2048, 00:05:33.370 "buf_count": 2048 00:05:33.370 } 00:05:33.370 } 00:05:33.370 ] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "bdev", 00:05:33.370 "config": [ 00:05:33.370 { 00:05:33.370 "method": "bdev_set_options", 00:05:33.370 "params": { 00:05:33.370 "bdev_io_pool_size": 65535, 00:05:33.370 "bdev_io_cache_size": 256, 00:05:33.370 "bdev_auto_examine": true, 00:05:33.370 "iobuf_small_cache_size": 128, 00:05:33.370 "iobuf_large_cache_size": 16 00:05:33.370 } 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "method": "bdev_raid_set_options", 00:05:33.370 "params": { 00:05:33.370 "process_window_size_kb": 1024, 00:05:33.370 "process_max_bandwidth_mb_sec": 0 00:05:33.370 } 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "method": "bdev_iscsi_set_options", 
00:05:33.370 "params": { 00:05:33.370 "timeout_sec": 30 00:05:33.370 } 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "method": "bdev_nvme_set_options", 00:05:33.370 "params": { 00:05:33.370 "action_on_timeout": "none", 00:05:33.370 "timeout_us": 0, 00:05:33.370 "timeout_admin_us": 0, 00:05:33.370 "keep_alive_timeout_ms": 10000, 00:05:33.370 "arbitration_burst": 0, 00:05:33.370 "low_priority_weight": 0, 00:05:33.370 "medium_priority_weight": 0, 00:05:33.370 "high_priority_weight": 0, 00:05:33.370 "nvme_adminq_poll_period_us": 10000, 00:05:33.370 "nvme_ioq_poll_period_us": 0, 00:05:33.370 "io_queue_requests": 0, 00:05:33.370 "delay_cmd_submit": true, 00:05:33.370 "transport_retry_count": 4, 00:05:33.370 "bdev_retry_count": 3, 00:05:33.370 "transport_ack_timeout": 0, 00:05:33.370 "ctrlr_loss_timeout_sec": 0, 00:05:33.370 "reconnect_delay_sec": 0, 00:05:33.370 "fast_io_fail_timeout_sec": 0, 00:05:33.370 "disable_auto_failback": false, 00:05:33.370 "generate_uuids": false, 00:05:33.370 "transport_tos": 0, 00:05:33.370 "nvme_error_stat": false, 00:05:33.370 "rdma_srq_size": 0, 00:05:33.370 "io_path_stat": false, 00:05:33.370 "allow_accel_sequence": false, 00:05:33.370 "rdma_max_cq_size": 0, 00:05:33.370 "rdma_cm_event_timeout_ms": 0, 00:05:33.370 "dhchap_digests": [ 00:05:33.370 "sha256", 00:05:33.370 "sha384", 00:05:33.370 "sha512" 00:05:33.370 ], 00:05:33.370 "dhchap_dhgroups": [ 00:05:33.370 "null", 00:05:33.370 "ffdhe2048", 00:05:33.370 "ffdhe3072", 00:05:33.370 "ffdhe4096", 00:05:33.370 "ffdhe6144", 00:05:33.370 "ffdhe8192" 00:05:33.370 ] 00:05:33.370 } 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "method": "bdev_nvme_set_hotplug", 00:05:33.370 "params": { 00:05:33.370 "period_us": 100000, 00:05:33.370 "enable": false 00:05:33.370 } 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "method": "bdev_wait_for_examine" 00:05:33.370 } 00:05:33.370 ] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "scsi", 00:05:33.370 "config": null 00:05:33.370 }, 00:05:33.370 { 
00:05:33.370 "subsystem": "scheduler", 00:05:33.370 "config": [ 00:05:33.370 { 00:05:33.370 "method": "framework_set_scheduler", 00:05:33.370 "params": { 00:05:33.370 "name": "static" 00:05:33.370 } 00:05:33.370 } 00:05:33.370 ] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "vhost_scsi", 00:05:33.370 "config": [] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "vhost_blk", 00:05:33.370 "config": [] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "ublk", 00:05:33.370 "config": [] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "nbd", 00:05:33.370 "config": [] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "nvmf", 00:05:33.370 "config": [ 00:05:33.370 { 00:05:33.370 "method": "nvmf_set_config", 00:05:33.370 "params": { 00:05:33.370 "discovery_filter": "match_any", 00:05:33.370 "admin_cmd_passthru": { 00:05:33.370 "identify_ctrlr": false 00:05:33.370 }, 00:05:33.370 "dhchap_digests": [ 00:05:33.370 "sha256", 00:05:33.370 "sha384", 00:05:33.370 "sha512" 00:05:33.370 ], 00:05:33.370 "dhchap_dhgroups": [ 00:05:33.370 "null", 00:05:33.370 "ffdhe2048", 00:05:33.370 "ffdhe3072", 00:05:33.370 "ffdhe4096", 00:05:33.370 "ffdhe6144", 00:05:33.370 "ffdhe8192" 00:05:33.370 ] 00:05:33.370 } 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "method": "nvmf_set_max_subsystems", 00:05:33.370 "params": { 00:05:33.370 "max_subsystems": 1024 00:05:33.370 } 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "method": "nvmf_set_crdt", 00:05:33.370 "params": { 00:05:33.370 "crdt1": 0, 00:05:33.370 "crdt2": 0, 00:05:33.370 "crdt3": 0 00:05:33.370 } 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "method": "nvmf_create_transport", 00:05:33.370 "params": { 00:05:33.370 "trtype": "TCP", 00:05:33.370 "max_queue_depth": 128, 00:05:33.370 "max_io_qpairs_per_ctrlr": 127, 00:05:33.370 "in_capsule_data_size": 4096, 00:05:33.370 "max_io_size": 131072, 00:05:33.370 "io_unit_size": 131072, 00:05:33.370 "max_aq_depth": 128, 00:05:33.370 "num_shared_buffers": 511, 
00:05:33.370 "buf_cache_size": 4294967295, 00:05:33.370 "dif_insert_or_strip": false, 00:05:33.370 "zcopy": false, 00:05:33.370 "c2h_success": true, 00:05:33.370 "sock_priority": 0, 00:05:33.370 "abort_timeout_sec": 1, 00:05:33.370 "ack_timeout": 0, 00:05:33.370 "data_wr_pool_size": 0 00:05:33.370 } 00:05:33.370 } 00:05:33.370 ] 00:05:33.370 }, 00:05:33.370 { 00:05:33.370 "subsystem": "iscsi", 00:05:33.370 "config": [ 00:05:33.370 { 00:05:33.370 "method": "iscsi_set_options", 00:05:33.370 "params": { 00:05:33.370 "node_base": "iqn.2016-06.io.spdk", 00:05:33.370 "max_sessions": 128, 00:05:33.370 "max_connections_per_session": 2, 00:05:33.370 "max_queue_depth": 64, 00:05:33.370 "default_time2wait": 2, 00:05:33.370 "default_time2retain": 20, 00:05:33.370 "first_burst_length": 8192, 00:05:33.370 "immediate_data": true, 00:05:33.370 "allow_duplicated_isid": false, 00:05:33.370 "error_recovery_level": 0, 00:05:33.370 "nop_timeout": 60, 00:05:33.370 "nop_in_interval": 30, 00:05:33.370 "disable_chap": false, 00:05:33.370 "require_chap": false, 00:05:33.370 "mutual_chap": false, 00:05:33.370 "chap_group": 0, 00:05:33.370 "max_large_datain_per_connection": 64, 00:05:33.370 "max_r2t_per_connection": 4, 00:05:33.370 "pdu_pool_size": 36864, 00:05:33.370 "immediate_data_pool_size": 16384, 00:05:33.370 "data_out_pool_size": 2048 00:05:33.370 } 00:05:33.370 } 00:05:33.370 ] 00:05:33.370 } 00:05:33.370 ] 00:05:33.370 } 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69510 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69510 ']' 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69510 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69510 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.371 killing process with pid 69510 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69510' 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69510 00:05:33.371 04:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69510 00:05:33.939 04:51:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69539 00:05:33.939 04:51:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.939 04:51:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69539 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69539 ']' 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69539 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69539 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:05:39.226 killing process with pid 69539 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69539' 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69539 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69539 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.226 00:05:39.226 real 0m6.948s 00:05:39.226 user 0m6.528s 00:05:39.226 sys 0m0.718s 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.226 ************************************ 00:05:39.226 END TEST skip_rpc_with_json 00:05:39.226 ************************************ 00:05:39.226 04:51:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.226 04:51:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.226 04:51:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.226 04:51:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.226 ************************************ 00:05:39.226 START TEST skip_rpc_with_delay 00:05:39.226 ************************************ 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:39.226 04:51:55 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:39.226 04:51:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.226 [2024-11-21 04:51:55.938235] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:39.485 04:51:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:39.485 04:51:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.485 04:51:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.485 04:51:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.485 00:05:39.485 real 0m0.167s 00:05:39.485 user 0m0.091s 00:05:39.485 sys 0m0.075s 00:05:39.485 04:51:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.485 04:51:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:39.485 ************************************ 00:05:39.485 END TEST skip_rpc_with_delay 00:05:39.485 ************************************ 00:05:39.485 04:51:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:39.485 04:51:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:39.485 04:51:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:39.485 04:51:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.485 04:51:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.485 04:51:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.485 ************************************ 00:05:39.485 START TEST exit_on_failed_rpc_init 00:05:39.485 ************************************ 00:05:39.485 04:51:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:39.485 04:51:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69645 00:05:39.485 04:51:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.485 04:51:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69645 00:05:39.485 04:51:56 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69645 ']' 00:05:39.485 04:51:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.485 04:51:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.485 04:51:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.485 04:51:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.485 04:51:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.485 [2024-11-21 04:51:56.198969] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:05:39.485 [2024-11-21 04:51:56.199119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69645 ] 00:05:39.745 [2024-11-21 04:51:56.371348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.745 [2024-11-21 04:51:56.401337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.682 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.683 04:51:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:40.683 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.683 [2024-11-21 04:51:57.172536] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:05:40.683 [2024-11-21 04:51:57.172704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69663 ] 00:05:40.683 [2024-11-21 04:51:57.332001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.683 [2024-11-21 04:51:57.396714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.683 [2024-11-21 04:51:57.396911] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:40.683 [2024-11-21 04:51:57.396962] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:40.683 [2024-11-21 04:51:57.396995] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69645 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69645 ']' 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69645 00:05:40.942 04:51:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69645 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69645' 00:05:40.942 killing process with pid 69645 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69645 00:05:40.942 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69645 00:05:41.510 00:05:41.510 real 0m1.861s 00:05:41.510 user 0m2.084s 00:05:41.510 sys 0m0.534s 00:05:41.510 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.510 04:51:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.510 ************************************ 00:05:41.510 END TEST exit_on_failed_rpc_init 00:05:41.510 ************************************ 00:05:41.510 04:51:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:41.510 00:05:41.510 real 0m14.900s 00:05:41.510 user 0m13.912s 00:05:41.510 sys 0m1.989s 00:05:41.510 04:51:57 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.510 04:51:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.510 ************************************ 00:05:41.510 END TEST skip_rpc 00:05:41.510 ************************************ 00:05:41.510 04:51:58 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:41.510 04:51:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.510 04:51:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.510 04:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:41.510 ************************************ 00:05:41.510 START TEST rpc_client 00:05:41.510 ************************************ 00:05:41.510 04:51:58 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:41.510 * Looking for test storage... 00:05:41.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:41.510 04:51:58 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.510 04:51:58 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.510 04:51:58 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.770 04:51:58 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.770 04:51:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:41.770 04:51:58 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.770 04:51:58 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.770 --rc genhtml_branch_coverage=1 00:05:41.770 --rc genhtml_function_coverage=1 00:05:41.770 --rc genhtml_legend=1 00:05:41.770 --rc geninfo_all_blocks=1 00:05:41.770 --rc geninfo_unexecuted_blocks=1 00:05:41.770 00:05:41.770 ' 00:05:41.770 04:51:58 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.770 --rc genhtml_branch_coverage=1 00:05:41.770 --rc genhtml_function_coverage=1 00:05:41.770 --rc 
genhtml_legend=1 00:05:41.770 --rc geninfo_all_blocks=1 00:05:41.770 --rc geninfo_unexecuted_blocks=1 00:05:41.770 00:05:41.770 ' 00:05:41.770 04:51:58 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.770 --rc genhtml_branch_coverage=1 00:05:41.770 --rc genhtml_function_coverage=1 00:05:41.770 --rc genhtml_legend=1 00:05:41.770 --rc geninfo_all_blocks=1 00:05:41.770 --rc geninfo_unexecuted_blocks=1 00:05:41.770 00:05:41.770 ' 00:05:41.770 04:51:58 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.770 --rc genhtml_branch_coverage=1 00:05:41.770 --rc genhtml_function_coverage=1 00:05:41.770 --rc genhtml_legend=1 00:05:41.770 --rc geninfo_all_blocks=1 00:05:41.770 --rc geninfo_unexecuted_blocks=1 00:05:41.770 00:05:41.770 ' 00:05:41.770 04:51:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:41.770 OK 00:05:41.770 04:51:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.770 00:05:41.770 real 0m0.293s 00:05:41.770 user 0m0.168s 00:05:41.770 sys 0m0.138s 00:05:41.770 04:51:58 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.770 04:51:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:41.770 ************************************ 00:05:41.770 END TEST rpc_client 00:05:41.770 ************************************ 00:05:41.770 04:51:58 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:41.770 04:51:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.770 04:51:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.770 04:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:41.770 ************************************ 00:05:41.770 START TEST json_config 
00:05:41.770 ************************************ 00:05:41.770 04:51:58 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:42.030 04:51:58 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.030 04:51:58 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.030 04:51:58 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.030 04:51:58 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.030 04:51:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.030 04:51:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.030 04:51:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.030 04:51:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.030 04:51:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.030 04:51:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.030 04:51:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.030 04:51:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.030 04:51:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.030 04:51:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.030 04:51:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.030 04:51:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:42.030 04:51:58 json_config -- scripts/common.sh@345 -- # : 1 00:05:42.030 04:51:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.030 04:51:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.030 04:51:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:42.031 04:51:58 json_config -- scripts/common.sh@353 -- # local d=1 00:05:42.031 04:51:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.031 04:51:58 json_config -- scripts/common.sh@355 -- # echo 1 00:05:42.031 04:51:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.031 04:51:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:42.031 04:51:58 json_config -- scripts/common.sh@353 -- # local d=2 00:05:42.031 04:51:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.031 04:51:58 json_config -- scripts/common.sh@355 -- # echo 2 00:05:42.031 04:51:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.031 04:51:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.031 04:51:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.031 04:51:58 json_config -- scripts/common.sh@368 -- # return 0 00:05:42.031 04:51:58 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.031 04:51:58 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.031 --rc genhtml_branch_coverage=1 00:05:42.031 --rc genhtml_function_coverage=1 00:05:42.031 --rc genhtml_legend=1 00:05:42.031 --rc geninfo_all_blocks=1 00:05:42.031 --rc geninfo_unexecuted_blocks=1 00:05:42.031 00:05:42.031 ' 00:05:42.031 04:51:58 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.031 --rc genhtml_branch_coverage=1 00:05:42.031 --rc genhtml_function_coverage=1 00:05:42.031 --rc genhtml_legend=1 00:05:42.031 --rc geninfo_all_blocks=1 00:05:42.031 --rc geninfo_unexecuted_blocks=1 00:05:42.031 00:05:42.031 ' 00:05:42.031 04:51:58 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.031 --rc genhtml_branch_coverage=1 00:05:42.031 --rc genhtml_function_coverage=1 00:05:42.031 --rc genhtml_legend=1 00:05:42.031 --rc geninfo_all_blocks=1 00:05:42.031 --rc geninfo_unexecuted_blocks=1 00:05:42.031 00:05:42.031 ' 00:05:42.031 04:51:58 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.031 --rc genhtml_branch_coverage=1 00:05:42.031 --rc genhtml_function_coverage=1 00:05:42.031 --rc genhtml_legend=1 00:05:42.031 --rc geninfo_all_blocks=1 00:05:42.031 --rc geninfo_unexecuted_blocks=1 00:05:42.031 00:05:42.031 ' 00:05:42.031 04:51:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c25ce9c2-d5ba-4cb7-beaf-bef433e902a6 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=c25ce9c2-d5ba-4cb7-beaf-bef433e902a6 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:42.031 04:51:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:42.031 04:51:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.031 04:51:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.031 04:51:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.031 04:51:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.031 04:51:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.031 04:51:58 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.031 04:51:58 json_config -- paths/export.sh@5 -- # export PATH 00:05:42.031 04:51:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@51 -- # : 0 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:42.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:42.031 04:51:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:42.031 04:51:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:42.031 04:51:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:42.031 04:51:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:42.031 04:51:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:42.031 04:51:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:42.031 WARNING: No tests are enabled so not running JSON configuration tests 00:05:42.031 04:51:58 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:42.031 04:51:58 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:42.031 ************************************ 00:05:42.031 END TEST json_config 00:05:42.031 ************************************ 00:05:42.031 00:05:42.031 real 0m0.254s 00:05:42.031 user 0m0.150s 00:05:42.031 sys 0m0.114s 00:05:42.031 04:51:58 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.031 04:51:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.031 04:51:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:42.031 04:51:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.031 04:51:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.031 04:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:42.031 ************************************ 00:05:42.031 START TEST json_config_extra_key 00:05:42.031 ************************************ 00:05:42.031 04:51:58 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:42.292 04:51:58 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.292 04:51:58 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:42.292 04:51:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.292 04:51:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:42.292 04:51:58 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.292 04:51:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.292 --rc genhtml_branch_coverage=1 00:05:42.292 --rc genhtml_function_coverage=1 00:05:42.292 --rc genhtml_legend=1 00:05:42.292 --rc geninfo_all_blocks=1 00:05:42.292 --rc geninfo_unexecuted_blocks=1 00:05:42.292 00:05:42.292 ' 00:05:42.292 04:51:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.292 --rc genhtml_branch_coverage=1 00:05:42.292 --rc genhtml_function_coverage=1 00:05:42.292 --rc 
genhtml_legend=1 00:05:42.292 --rc geninfo_all_blocks=1 00:05:42.292 --rc geninfo_unexecuted_blocks=1 00:05:42.292 00:05:42.292 ' 00:05:42.292 04:51:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.292 --rc genhtml_branch_coverage=1 00:05:42.292 --rc genhtml_function_coverage=1 00:05:42.292 --rc genhtml_legend=1 00:05:42.292 --rc geninfo_all_blocks=1 00:05:42.292 --rc geninfo_unexecuted_blocks=1 00:05:42.292 00:05:42.292 ' 00:05:42.292 04:51:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.292 --rc genhtml_branch_coverage=1 00:05:42.292 --rc genhtml_function_coverage=1 00:05:42.292 --rc genhtml_legend=1 00:05:42.292 --rc geninfo_all_blocks=1 00:05:42.292 --rc geninfo_unexecuted_blocks=1 00:05:42.292 00:05:42.292 ' 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c25ce9c2-d5ba-4cb7-beaf-bef433e902a6 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c25ce9c2-d5ba-4cb7-beaf-bef433e902a6 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.292 04:51:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.292 04:51:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.292 04:51:58 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.292 04:51:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.292 04:51:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:42.292 04:51:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:42.292 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:42.292 04:51:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:42.292 INFO: launching applications... 00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:42.292 04:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:42.292 04:51:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:42.292 04:51:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:42.293 04:51:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:42.293 04:51:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:42.293 04:51:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:42.293 04:51:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.293 04:51:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.293 04:51:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69852 00:05:42.293 Waiting for target to run... 00:05:42.293 04:51:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:42.293 04:51:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69852 /var/tmp/spdk_tgt.sock 00:05:42.293 04:51:58 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69852 ']' 00:05:42.293 04:51:58 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:42.293 04:51:58 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.293 04:51:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:42.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:42.293 04:51:58 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:42.293 04:51:58 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.293 04:51:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:42.552 [2024-11-21 04:51:59.055032] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:05:42.552 [2024-11-21 04:51:59.055200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69852 ] 00:05:43.121 [2024-11-21 04:51:59.633315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.121 [2024-11-21 04:51:59.652526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.381 04:51:59 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.381 04:51:59 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:43.381 00:05:43.381 04:51:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:43.381 INFO: shutting down applications... 00:05:43.381 04:51:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:43.381 04:51:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:43.381 04:51:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:43.381 04:51:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:43.381 04:51:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69852 ]] 00:05:43.381 04:51:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69852 00:05:43.381 04:51:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:43.381 04:51:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.381 04:51:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69852 00:05:43.381 04:51:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.641 04:52:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.641 04:52:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.641 04:52:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69852 00:05:43.641 04:52:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:43.641 04:52:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:43.641 04:52:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:43.641 SPDK target shutdown done 00:05:43.641 04:52:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:43.641 Success 00:05:43.641 04:52:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:43.641 ************************************ 00:05:43.641 END TEST json_config_extra_key 00:05:43.641 ************************************ 00:05:43.641 00:05:43.641 real 0m1.645s 00:05:43.641 user 0m1.167s 00:05:43.641 sys 0m0.661s 00:05:43.641 04:52:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:43.641 04:52:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.899 04:52:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.899 04:52:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.899 04:52:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.899 04:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:43.899 ************************************ 00:05:43.899 START TEST alias_rpc 00:05:43.899 ************************************ 00:05:43.899 04:52:00 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.899 * Looking for test storage... 00:05:43.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:43.899 04:52:00 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.899 04:52:00 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.899 04:52:00 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.159 04:52:00 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.159 04:52:00 alias_rpc 
-- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.159 04:52:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:44.159 04:52:00 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.159 04:52:00 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.159 --rc genhtml_branch_coverage=1 00:05:44.159 --rc genhtml_function_coverage=1 00:05:44.159 --rc genhtml_legend=1 00:05:44.159 --rc geninfo_all_blocks=1 00:05:44.159 --rc geninfo_unexecuted_blocks=1 00:05:44.159 00:05:44.159 ' 00:05:44.159 04:52:00 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.159 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.159 --rc genhtml_branch_coverage=1 00:05:44.159 --rc genhtml_function_coverage=1 00:05:44.159 --rc genhtml_legend=1 00:05:44.159 --rc geninfo_all_blocks=1 00:05:44.159 --rc geninfo_unexecuted_blocks=1 00:05:44.159 00:05:44.159 ' 00:05:44.159 04:52:00 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.160 --rc genhtml_branch_coverage=1 00:05:44.160 --rc genhtml_function_coverage=1 00:05:44.160 --rc genhtml_legend=1 00:05:44.160 --rc geninfo_all_blocks=1 00:05:44.160 --rc geninfo_unexecuted_blocks=1 00:05:44.160 00:05:44.160 ' 00:05:44.160 04:52:00 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.160 --rc genhtml_branch_coverage=1 00:05:44.160 --rc genhtml_function_coverage=1 00:05:44.160 --rc genhtml_legend=1 00:05:44.160 --rc geninfo_all_blocks=1 00:05:44.160 --rc geninfo_unexecuted_blocks=1 00:05:44.160 00:05:44.160 ' 00:05:44.160 04:52:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.160 04:52:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69931 00:05:44.160 04:52:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.160 04:52:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69931 00:05:44.160 04:52:00 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69931 ']' 00:05:44.160 04:52:00 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.160 04:52:00 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.160 04:52:00 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.160 04:52:00 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.160 04:52:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.160 [2024-11-21 04:52:00.754265] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:05:44.160 [2024-11-21 04:52:00.754399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69931 ] 00:05:44.420 [2024-11-21 04:52:00.922646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.420 [2024-11-21 04:52:00.948045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.012 04:52:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.012 04:52:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.012 04:52:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:45.271 04:52:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69931 00:05:45.271 04:52:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69931 ']' 00:05:45.271 04:52:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69931 00:05:45.271 04:52:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:45.271 04:52:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.271 04:52:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69931 00:05:45.271 04:52:01 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.271 04:52:01 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.271 04:52:01 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 69931' 00:05:45.271 killing process with pid 69931 00:05:45.271 04:52:01 alias_rpc -- common/autotest_common.sh@973 -- # kill 69931 00:05:45.271 04:52:01 alias_rpc -- common/autotest_common.sh@978 -- # wait 69931 00:05:45.529 00:05:45.529 real 0m1.746s 00:05:45.529 user 0m1.747s 00:05:45.529 sys 0m0.511s 00:05:45.529 04:52:02 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.529 04:52:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.529 ************************************ 00:05:45.529 END TEST alias_rpc 00:05:45.529 ************************************ 00:05:45.529 04:52:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:45.529 04:52:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:45.529 04:52:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.529 04:52:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.529 04:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.529 ************************************ 00:05:45.529 START TEST spdkcli_tcp 00:05:45.529 ************************************ 00:05:45.529 04:52:02 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:45.787 * Looking for test storage... 
00:05:45.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:45.787 04:52:02 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.787 04:52:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.788 04:52:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.788 --rc genhtml_branch_coverage=1 00:05:45.788 --rc genhtml_function_coverage=1 00:05:45.788 --rc genhtml_legend=1 00:05:45.788 --rc geninfo_all_blocks=1 00:05:45.788 --rc geninfo_unexecuted_blocks=1 00:05:45.788 00:05:45.788 ' 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.788 --rc genhtml_branch_coverage=1 00:05:45.788 --rc genhtml_function_coverage=1 00:05:45.788 --rc genhtml_legend=1 00:05:45.788 --rc geninfo_all_blocks=1 00:05:45.788 --rc geninfo_unexecuted_blocks=1 00:05:45.788 00:05:45.788 ' 00:05:45.788 04:52:02 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.788 --rc genhtml_branch_coverage=1 00:05:45.788 --rc genhtml_function_coverage=1 00:05:45.788 --rc genhtml_legend=1 00:05:45.788 --rc geninfo_all_blocks=1 00:05:45.788 --rc geninfo_unexecuted_blocks=1 00:05:45.788 00:05:45.788 ' 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.788 --rc genhtml_branch_coverage=1 00:05:45.788 --rc genhtml_function_coverage=1 00:05:45.788 --rc genhtml_legend=1 00:05:45.788 --rc geninfo_all_blocks=1 00:05:45.788 --rc geninfo_unexecuted_blocks=1 00:05:45.788 00:05:45.788 ' 00:05:45.788 04:52:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:45.788 04:52:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:45.788 04:52:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:45.788 04:52:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:45.788 04:52:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:45.788 04:52:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:45.788 04:52:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.788 04:52:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70016 00:05:45.788 04:52:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70016 00:05:45.788 04:52:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:45.788 04:52:02 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 70016 ']' 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.788 04:52:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.047 [2024-11-21 04:52:02.574662] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:05:46.047 [2024-11-21 04:52:02.574805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70016 ] 00:05:46.047 [2024-11-21 04:52:02.743607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.047 [2024-11-21 04:52:02.769709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.047 [2024-11-21 04:52:02.769828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.980 04:52:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.980 04:52:03 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:46.980 04:52:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:46.980 04:52:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70022 00:05:46.980 04:52:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:46.980 [ 00:05:46.981 "bdev_malloc_delete", 
00:05:46.981 "bdev_malloc_create", 00:05:46.981 "bdev_null_resize", 00:05:46.981 "bdev_null_delete", 00:05:46.981 "bdev_null_create", 00:05:46.981 "bdev_nvme_cuse_unregister", 00:05:46.981 "bdev_nvme_cuse_register", 00:05:46.981 "bdev_opal_new_user", 00:05:46.981 "bdev_opal_set_lock_state", 00:05:46.981 "bdev_opal_delete", 00:05:46.981 "bdev_opal_get_info", 00:05:46.981 "bdev_opal_create", 00:05:46.981 "bdev_nvme_opal_revert", 00:05:46.981 "bdev_nvme_opal_init", 00:05:46.981 "bdev_nvme_send_cmd", 00:05:46.981 "bdev_nvme_set_keys", 00:05:46.981 "bdev_nvme_get_path_iostat", 00:05:46.981 "bdev_nvme_get_mdns_discovery_info", 00:05:46.981 "bdev_nvme_stop_mdns_discovery", 00:05:46.981 "bdev_nvme_start_mdns_discovery", 00:05:46.981 "bdev_nvme_set_multipath_policy", 00:05:46.981 "bdev_nvme_set_preferred_path", 00:05:46.981 "bdev_nvme_get_io_paths", 00:05:46.981 "bdev_nvme_remove_error_injection", 00:05:46.981 "bdev_nvme_add_error_injection", 00:05:46.981 "bdev_nvme_get_discovery_info", 00:05:46.981 "bdev_nvme_stop_discovery", 00:05:46.981 "bdev_nvme_start_discovery", 00:05:46.981 "bdev_nvme_get_controller_health_info", 00:05:46.981 "bdev_nvme_disable_controller", 00:05:46.981 "bdev_nvme_enable_controller", 00:05:46.981 "bdev_nvme_reset_controller", 00:05:46.981 "bdev_nvme_get_transport_statistics", 00:05:46.981 "bdev_nvme_apply_firmware", 00:05:46.981 "bdev_nvme_detach_controller", 00:05:46.981 "bdev_nvme_get_controllers", 00:05:46.981 "bdev_nvme_attach_controller", 00:05:46.981 "bdev_nvme_set_hotplug", 00:05:46.981 "bdev_nvme_set_options", 00:05:46.981 "bdev_passthru_delete", 00:05:46.981 "bdev_passthru_create", 00:05:46.981 "bdev_lvol_set_parent_bdev", 00:05:46.981 "bdev_lvol_set_parent", 00:05:46.981 "bdev_lvol_check_shallow_copy", 00:05:46.981 "bdev_lvol_start_shallow_copy", 00:05:46.981 "bdev_lvol_grow_lvstore", 00:05:46.981 "bdev_lvol_get_lvols", 00:05:46.981 "bdev_lvol_get_lvstores", 00:05:46.981 "bdev_lvol_delete", 00:05:46.981 "bdev_lvol_set_read_only", 
00:05:46.981 "bdev_lvol_resize", 00:05:46.981 "bdev_lvol_decouple_parent", 00:05:46.981 "bdev_lvol_inflate", 00:05:46.981 "bdev_lvol_rename", 00:05:46.981 "bdev_lvol_clone_bdev", 00:05:46.981 "bdev_lvol_clone", 00:05:46.981 "bdev_lvol_snapshot", 00:05:46.981 "bdev_lvol_create", 00:05:46.981 "bdev_lvol_delete_lvstore", 00:05:46.981 "bdev_lvol_rename_lvstore", 00:05:46.981 "bdev_lvol_create_lvstore", 00:05:46.981 "bdev_raid_set_options", 00:05:46.981 "bdev_raid_remove_base_bdev", 00:05:46.981 "bdev_raid_add_base_bdev", 00:05:46.981 "bdev_raid_delete", 00:05:46.981 "bdev_raid_create", 00:05:46.981 "bdev_raid_get_bdevs", 00:05:46.981 "bdev_error_inject_error", 00:05:46.981 "bdev_error_delete", 00:05:46.981 "bdev_error_create", 00:05:46.981 "bdev_split_delete", 00:05:46.981 "bdev_split_create", 00:05:46.981 "bdev_delay_delete", 00:05:46.981 "bdev_delay_create", 00:05:46.981 "bdev_delay_update_latency", 00:05:46.981 "bdev_zone_block_delete", 00:05:46.981 "bdev_zone_block_create", 00:05:46.981 "blobfs_create", 00:05:46.981 "blobfs_detect", 00:05:46.981 "blobfs_set_cache_size", 00:05:46.981 "bdev_aio_delete", 00:05:46.981 "bdev_aio_rescan", 00:05:46.981 "bdev_aio_create", 00:05:46.981 "bdev_ftl_set_property", 00:05:46.981 "bdev_ftl_get_properties", 00:05:46.981 "bdev_ftl_get_stats", 00:05:46.981 "bdev_ftl_unmap", 00:05:46.981 "bdev_ftl_unload", 00:05:46.981 "bdev_ftl_delete", 00:05:46.981 "bdev_ftl_load", 00:05:46.981 "bdev_ftl_create", 00:05:46.981 "bdev_virtio_attach_controller", 00:05:46.981 "bdev_virtio_scsi_get_devices", 00:05:46.981 "bdev_virtio_detach_controller", 00:05:46.981 "bdev_virtio_blk_set_hotplug", 00:05:46.981 "bdev_iscsi_delete", 00:05:46.981 "bdev_iscsi_create", 00:05:46.981 "bdev_iscsi_set_options", 00:05:46.981 "accel_error_inject_error", 00:05:46.981 "ioat_scan_accel_module", 00:05:46.981 "dsa_scan_accel_module", 00:05:46.981 "iaa_scan_accel_module", 00:05:46.981 "keyring_file_remove_key", 00:05:46.981 "keyring_file_add_key", 00:05:46.981 
"keyring_linux_set_options", 00:05:46.981 "fsdev_aio_delete", 00:05:46.981 "fsdev_aio_create", 00:05:46.981 "iscsi_get_histogram", 00:05:46.981 "iscsi_enable_histogram", 00:05:46.981 "iscsi_set_options", 00:05:46.981 "iscsi_get_auth_groups", 00:05:46.981 "iscsi_auth_group_remove_secret", 00:05:46.981 "iscsi_auth_group_add_secret", 00:05:46.981 "iscsi_delete_auth_group", 00:05:46.981 "iscsi_create_auth_group", 00:05:46.981 "iscsi_set_discovery_auth", 00:05:46.981 "iscsi_get_options", 00:05:46.981 "iscsi_target_node_request_logout", 00:05:46.981 "iscsi_target_node_set_redirect", 00:05:46.981 "iscsi_target_node_set_auth", 00:05:46.981 "iscsi_target_node_add_lun", 00:05:46.981 "iscsi_get_stats", 00:05:46.981 "iscsi_get_connections", 00:05:46.981 "iscsi_portal_group_set_auth", 00:05:46.981 "iscsi_start_portal_group", 00:05:46.981 "iscsi_delete_portal_group", 00:05:46.981 "iscsi_create_portal_group", 00:05:46.981 "iscsi_get_portal_groups", 00:05:46.981 "iscsi_delete_target_node", 00:05:46.981 "iscsi_target_node_remove_pg_ig_maps", 00:05:46.981 "iscsi_target_node_add_pg_ig_maps", 00:05:46.981 "iscsi_create_target_node", 00:05:46.981 "iscsi_get_target_nodes", 00:05:46.981 "iscsi_delete_initiator_group", 00:05:46.981 "iscsi_initiator_group_remove_initiators", 00:05:46.981 "iscsi_initiator_group_add_initiators", 00:05:46.981 "iscsi_create_initiator_group", 00:05:46.981 "iscsi_get_initiator_groups", 00:05:46.981 "nvmf_set_crdt", 00:05:46.981 "nvmf_set_config", 00:05:46.981 "nvmf_set_max_subsystems", 00:05:46.981 "nvmf_stop_mdns_prr", 00:05:46.981 "nvmf_publish_mdns_prr", 00:05:46.981 "nvmf_subsystem_get_listeners", 00:05:46.981 "nvmf_subsystem_get_qpairs", 00:05:46.981 "nvmf_subsystem_get_controllers", 00:05:46.981 "nvmf_get_stats", 00:05:46.981 "nvmf_get_transports", 00:05:46.981 "nvmf_create_transport", 00:05:46.981 "nvmf_get_targets", 00:05:46.981 "nvmf_delete_target", 00:05:46.981 "nvmf_create_target", 00:05:46.981 "nvmf_subsystem_allow_any_host", 00:05:46.981 
"nvmf_subsystem_set_keys", 00:05:46.981 "nvmf_subsystem_remove_host", 00:05:46.981 "nvmf_subsystem_add_host", 00:05:46.981 "nvmf_ns_remove_host", 00:05:46.981 "nvmf_ns_add_host", 00:05:46.981 "nvmf_subsystem_remove_ns", 00:05:46.981 "nvmf_subsystem_set_ns_ana_group", 00:05:46.981 "nvmf_subsystem_add_ns", 00:05:46.981 "nvmf_subsystem_listener_set_ana_state", 00:05:46.981 "nvmf_discovery_get_referrals", 00:05:46.981 "nvmf_discovery_remove_referral", 00:05:46.981 "nvmf_discovery_add_referral", 00:05:46.981 "nvmf_subsystem_remove_listener", 00:05:46.981 "nvmf_subsystem_add_listener", 00:05:46.981 "nvmf_delete_subsystem", 00:05:46.981 "nvmf_create_subsystem", 00:05:46.981 "nvmf_get_subsystems", 00:05:46.981 "env_dpdk_get_mem_stats", 00:05:46.981 "nbd_get_disks", 00:05:46.981 "nbd_stop_disk", 00:05:46.981 "nbd_start_disk", 00:05:46.981 "ublk_recover_disk", 00:05:46.981 "ublk_get_disks", 00:05:46.981 "ublk_stop_disk", 00:05:46.981 "ublk_start_disk", 00:05:46.981 "ublk_destroy_target", 00:05:46.981 "ublk_create_target", 00:05:46.981 "virtio_blk_create_transport", 00:05:46.981 "virtio_blk_get_transports", 00:05:46.982 "vhost_controller_set_coalescing", 00:05:46.982 "vhost_get_controllers", 00:05:46.982 "vhost_delete_controller", 00:05:46.982 "vhost_create_blk_controller", 00:05:46.982 "vhost_scsi_controller_remove_target", 00:05:46.982 "vhost_scsi_controller_add_target", 00:05:46.982 "vhost_start_scsi_controller", 00:05:46.982 "vhost_create_scsi_controller", 00:05:46.982 "thread_set_cpumask", 00:05:46.982 "scheduler_set_options", 00:05:46.982 "framework_get_governor", 00:05:46.982 "framework_get_scheduler", 00:05:46.982 "framework_set_scheduler", 00:05:46.982 "framework_get_reactors", 00:05:46.982 "thread_get_io_channels", 00:05:46.982 "thread_get_pollers", 00:05:46.982 "thread_get_stats", 00:05:46.982 "framework_monitor_context_switch", 00:05:46.982 "spdk_kill_instance", 00:05:46.982 "log_enable_timestamps", 00:05:46.982 "log_get_flags", 00:05:46.982 "log_clear_flag", 
00:05:46.982 "log_set_flag", 00:05:46.982 "log_get_level", 00:05:46.982 "log_set_level", 00:05:46.982 "log_get_print_level", 00:05:46.982 "log_set_print_level", 00:05:46.982 "framework_enable_cpumask_locks", 00:05:46.982 "framework_disable_cpumask_locks", 00:05:46.982 "framework_wait_init", 00:05:46.982 "framework_start_init", 00:05:46.982 "scsi_get_devices", 00:05:46.982 "bdev_get_histogram", 00:05:46.982 "bdev_enable_histogram", 00:05:46.982 "bdev_set_qos_limit", 00:05:46.982 "bdev_set_qd_sampling_period", 00:05:46.982 "bdev_get_bdevs", 00:05:46.982 "bdev_reset_iostat", 00:05:46.982 "bdev_get_iostat", 00:05:46.982 "bdev_examine", 00:05:46.982 "bdev_wait_for_examine", 00:05:46.982 "bdev_set_options", 00:05:46.982 "accel_get_stats", 00:05:46.982 "accel_set_options", 00:05:46.982 "accel_set_driver", 00:05:46.982 "accel_crypto_key_destroy", 00:05:46.982 "accel_crypto_keys_get", 00:05:46.982 "accel_crypto_key_create", 00:05:46.982 "accel_assign_opc", 00:05:46.982 "accel_get_module_info", 00:05:46.982 "accel_get_opc_assignments", 00:05:46.982 "vmd_rescan", 00:05:46.982 "vmd_remove_device", 00:05:46.982 "vmd_enable", 00:05:46.982 "sock_get_default_impl", 00:05:46.982 "sock_set_default_impl", 00:05:46.982 "sock_impl_set_options", 00:05:46.982 "sock_impl_get_options", 00:05:46.982 "iobuf_get_stats", 00:05:46.982 "iobuf_set_options", 00:05:46.982 "keyring_get_keys", 00:05:46.982 "framework_get_pci_devices", 00:05:46.982 "framework_get_config", 00:05:46.982 "framework_get_subsystems", 00:05:46.982 "fsdev_set_opts", 00:05:46.982 "fsdev_get_opts", 00:05:46.982 "trace_get_info", 00:05:46.982 "trace_get_tpoint_group_mask", 00:05:46.982 "trace_disable_tpoint_group", 00:05:46.982 "trace_enable_tpoint_group", 00:05:46.982 "trace_clear_tpoint_mask", 00:05:46.982 "trace_set_tpoint_mask", 00:05:46.982 "notify_get_notifications", 00:05:46.982 "notify_get_types", 00:05:46.982 "spdk_get_version", 00:05:46.982 "rpc_get_methods" 00:05:46.982 ] 00:05:46.982 04:52:03 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.982 04:52:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:46.982 04:52:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70016 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70016 ']' 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70016 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70016 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.982 killing process with pid 70016 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70016' 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70016 00:05:46.982 04:52:03 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70016 00:05:47.550 00:05:47.550 real 0m1.810s 00:05:47.550 user 0m2.990s 00:05:47.550 sys 0m0.580s 00:05:47.550 04:52:04 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.550 04:52:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.550 ************************************ 00:05:47.550 END TEST spdkcli_tcp 00:05:47.550 ************************************ 00:05:47.551 04:52:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.551 04:52:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.551 04:52:04 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.551 04:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:47.551 ************************************ 00:05:47.551 START TEST dpdk_mem_utility 00:05:47.551 ************************************ 00:05:47.551 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.551 * Looking for test storage... 00:05:47.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:47.551 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.551 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.551 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.810 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:47.810 
04:52:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.810 04:52:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:47.810 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.810 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.810 --rc genhtml_branch_coverage=1 00:05:47.810 --rc genhtml_function_coverage=1 00:05:47.810 --rc genhtml_legend=1 00:05:47.810 --rc geninfo_all_blocks=1 00:05:47.810 --rc geninfo_unexecuted_blocks=1 00:05:47.810 00:05:47.810 ' 00:05:47.810 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.810 --rc 
genhtml_branch_coverage=1 00:05:47.810 --rc genhtml_function_coverage=1 00:05:47.810 --rc genhtml_legend=1 00:05:47.810 --rc geninfo_all_blocks=1 00:05:47.810 --rc geninfo_unexecuted_blocks=1 00:05:47.810 00:05:47.810 ' 00:05:47.810 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.810 --rc genhtml_branch_coverage=1 00:05:47.810 --rc genhtml_function_coverage=1 00:05:47.810 --rc genhtml_legend=1 00:05:47.810 --rc geninfo_all_blocks=1 00:05:47.810 --rc geninfo_unexecuted_blocks=1 00:05:47.810 00:05:47.810 ' 00:05:47.810 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.810 --rc genhtml_branch_coverage=1 00:05:47.810 --rc genhtml_function_coverage=1 00:05:47.810 --rc genhtml_legend=1 00:05:47.810 --rc geninfo_all_blocks=1 00:05:47.810 --rc geninfo_unexecuted_blocks=1 00:05:47.810 00:05:47.810 ' 00:05:47.811 04:52:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:47.811 04:52:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70105 00:05:47.811 04:52:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.811 04:52:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70105 00:05:47.811 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70105 ']' 00:05:47.811 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.811 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.811 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.811 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.811 04:52:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.811 [2024-11-21 04:52:04.432017] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:05:47.811 [2024-11-21 04:52:04.432157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70105 ] 00:05:48.070 [2024-11-21 04:52:04.580577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.070 [2024-11-21 04:52:04.615031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.655 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.655 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:48.655 04:52:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:48.655 04:52:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:48.655 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.655 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.655 { 00:05:48.655 "filename": "/tmp/spdk_mem_dump.txt" 00:05:48.655 } 00:05:48.655 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.655 04:52:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:48.655 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:48.655 1 heaps 
totaling size 810.000000 MiB 00:05:48.655 size: 810.000000 MiB heap id: 0 00:05:48.655 end heaps---------- 00:05:48.655 9 mempools totaling size 595.772034 MiB 00:05:48.655 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:48.655 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:48.655 size: 92.545471 MiB name: bdev_io_70105 00:05:48.655 size: 50.003479 MiB name: msgpool_70105 00:05:48.655 size: 36.509338 MiB name: fsdev_io_70105 00:05:48.655 size: 21.763794 MiB name: PDU_Pool 00:05:48.655 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:48.655 size: 4.133484 MiB name: evtpool_70105 00:05:48.655 size: 0.026123 MiB name: Session_Pool 00:05:48.655 end mempools------- 00:05:48.655 6 memzones totaling size 4.142822 MiB 00:05:48.655 size: 1.000366 MiB name: RG_ring_0_70105 00:05:48.655 size: 1.000366 MiB name: RG_ring_1_70105 00:05:48.655 size: 1.000366 MiB name: RG_ring_4_70105 00:05:48.655 size: 1.000366 MiB name: RG_ring_5_70105 00:05:48.655 size: 0.125366 MiB name: RG_ring_2_70105 00:05:48.655 size: 0.015991 MiB name: RG_ring_3_70105 00:05:48.655 end memzones------- 00:05:48.655 04:52:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:48.655 heap id: 0 total size: 810.000000 MiB number of busy elements: 312 number of free elements: 15 00:05:48.655 list of free elements. 
size: 10.813416 MiB 00:05:48.655 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:48.655 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:48.655 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:48.655 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:48.655 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:48.655 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:48.655 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:48.655 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:48.655 element at address: 0x20001a600000 with size: 0.567322 MiB 00:05:48.655 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:48.655 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:48.655 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:48.655 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:48.655 element at address: 0x200027a00000 with size: 0.396301 MiB 00:05:48.655 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:48.655 list of standard malloc elements. 
size: 199.267700 MiB 00:05:48.655 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:48.655 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:48.655 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:48.655 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:48.655 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:48.655 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:48.655 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:48.655 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:48.655 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:48.655 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:48.655 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:48.655 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:48.656 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:48.656 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:05:48.656 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:48.656 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:48.656 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d6c0 with 
size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:48.656 element at address: 
0x200000c7ebc0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:48.656 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:48.656 
element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691480 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691540 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691600 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a6922c0 with size: 0.000183 
MiB 00:05:48.656 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:48.656 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a6937c0 
with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:48.657 element at 
address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:48.657 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a65740 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a65800 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6c400 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 
00:05:48.657 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e4c0 with 
size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:48.657 element at address: 
0x200027a6f9c0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:48.657 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:48.657 list of memzone associated elements. size: 599.918884 MiB 00:05:48.657 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:48.657 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:48.657 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:48.657 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:48.657 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:48.657 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70105_0 00:05:48.657 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:48.657 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70105_0 00:05:48.657 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:48.657 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70105_0 00:05:48.657 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:48.657 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:48.657 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:48.657 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:48.657 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:48.657 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70105_0 00:05:48.657 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:48.657 associated memzone info: size: 2.000366 
MiB name: RG_MP_msgpool_70105 00:05:48.657 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:48.657 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70105 00:05:48.657 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:48.657 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:48.657 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:48.657 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:48.657 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:48.657 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:48.657 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:48.657 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:48.657 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:48.658 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70105 00:05:48.658 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:48.658 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70105 00:05:48.658 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:48.658 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70105 00:05:48.658 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:48.658 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70105 00:05:48.658 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:48.658 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70105 00:05:48.658 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:48.658 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70105 00:05:48.658 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:48.658 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:48.658 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:48.658 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:05:48.658 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:48.658 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:48.658 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:48.658 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70105 00:05:48.658 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:48.658 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70105 00:05:48.658 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:48.658 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:48.658 element at address: 0x200027a658c0 with size: 0.023743 MiB 00:05:48.658 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:48.658 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:48.658 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70105 00:05:48.658 element at address: 0x200027a6ba00 with size: 0.002441 MiB 00:05:48.658 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:48.658 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:48.658 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70105 00:05:48.658 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:48.658 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70105 00:05:48.658 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:48.658 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70105 00:05:48.658 element at address: 0x200027a6c4c0 with size: 0.000305 MiB 00:05:48.658 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:48.658 04:52:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:48.658 04:52:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70105 00:05:48.658 04:52:05 dpdk_mem_utility -- 
common/autotest_common.sh@954 -- # '[' -z 70105 ']' 00:05:48.658 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70105 00:05:48.658 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:48.658 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.658 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70105 00:05:48.918 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.918 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.918 killing process with pid 70105 00:05:48.918 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70105' 00:05:48.918 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70105 00:05:48.918 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70105 00:05:49.177 00:05:49.177 real 0m1.642s 00:05:49.177 user 0m1.583s 00:05:49.177 sys 0m0.505s 00:05:49.177 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.177 04:52:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.177 ************************************ 00:05:49.177 END TEST dpdk_mem_utility 00:05:49.177 ************************************ 00:05:49.177 04:52:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:49.177 04:52:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.178 04:52:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.178 04:52:05 -- common/autotest_common.sh@10 -- # set +x 00:05:49.178 ************************************ 00:05:49.178 START TEST event 00:05:49.178 ************************************ 00:05:49.178 04:52:05 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:49.438 * Looking for test 
storage... 00:05:49.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:49.438 04:52:05 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.438 04:52:05 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.438 04:52:05 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.438 04:52:06 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.438 04:52:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.438 04:52:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.438 04:52:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.438 04:52:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.438 04:52:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.438 04:52:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.438 04:52:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.438 04:52:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.438 04:52:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.438 04:52:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.438 04:52:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.438 04:52:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:49.438 04:52:06 event -- scripts/common.sh@345 -- # : 1 00:05:49.438 04:52:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.438 04:52:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.438 04:52:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:49.438 04:52:06 event -- scripts/common.sh@353 -- # local d=1 00:05:49.438 04:52:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.438 04:52:06 event -- scripts/common.sh@355 -- # echo 1 00:05:49.438 04:52:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.438 04:52:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:49.438 04:52:06 event -- scripts/common.sh@353 -- # local d=2 00:05:49.438 04:52:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.438 04:52:06 event -- scripts/common.sh@355 -- # echo 2 00:05:49.438 04:52:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.438 04:52:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.438 04:52:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.438 04:52:06 event -- scripts/common.sh@368 -- # return 0 00:05:49.438 04:52:06 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.438 04:52:06 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.438 --rc genhtml_branch_coverage=1 00:05:49.438 --rc genhtml_function_coverage=1 00:05:49.438 --rc genhtml_legend=1 00:05:49.438 --rc geninfo_all_blocks=1 00:05:49.438 --rc geninfo_unexecuted_blocks=1 00:05:49.438 00:05:49.438 ' 00:05:49.438 04:52:06 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.438 --rc genhtml_branch_coverage=1 00:05:49.438 --rc genhtml_function_coverage=1 00:05:49.438 --rc genhtml_legend=1 00:05:49.438 --rc geninfo_all_blocks=1 00:05:49.438 --rc geninfo_unexecuted_blocks=1 00:05:49.438 00:05:49.438 ' 00:05:49.438 04:52:06 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.438 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:49.438 --rc genhtml_branch_coverage=1 00:05:49.438 --rc genhtml_function_coverage=1 00:05:49.438 --rc genhtml_legend=1 00:05:49.438 --rc geninfo_all_blocks=1 00:05:49.438 --rc geninfo_unexecuted_blocks=1 00:05:49.438 00:05:49.438 ' 00:05:49.438 04:52:06 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.438 --rc genhtml_branch_coverage=1 00:05:49.438 --rc genhtml_function_coverage=1 00:05:49.438 --rc genhtml_legend=1 00:05:49.438 --rc geninfo_all_blocks=1 00:05:49.438 --rc geninfo_unexecuted_blocks=1 00:05:49.438 00:05:49.438 ' 00:05:49.438 04:52:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:49.438 04:52:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.438 04:52:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.438 04:52:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:49.438 04:52:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.438 04:52:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.438 ************************************ 00:05:49.438 START TEST event_perf 00:05:49.438 ************************************ 00:05:49.438 04:52:06 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.438 Running I/O for 1 seconds...[2024-11-21 04:52:06.104394] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:05:49.438 [2024-11-21 04:52:06.104504] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70191 ] 00:05:49.698 [2024-11-21 04:52:06.276077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.698 [2024-11-21 04:52:06.305196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.698 [2024-11-21 04:52:06.305388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.698 [2024-11-21 04:52:06.305414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.698 [2024-11-21 04:52:06.305543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.638 Running I/O for 1 seconds... 00:05:50.638 lcore 0: 109162 00:05:50.638 lcore 1: 109161 00:05:50.638 lcore 2: 109164 00:05:50.638 lcore 3: 109163 00:05:50.638 done. 
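The per-lcore counters printed above (four reactors, ~109k events each) are what event_perf uses to show the load was spread evenly. A minimal, hypothetical helper (not SPDK code) that checks such "lcore N: COUNT" output for balance within an assumed 5% tolerance could look like:

```shell
#!/usr/bin/env bash
# Hypothetical helper: verify that per-lcore event counts printed by
# event_perf ("lcore N: COUNT") stay within 5% of each other.
# The 5% threshold is an illustrative assumption, not an SPDK default.
check_lcore_balance() {
    local min=0 max=0 count
    while read -r _ _ count; do
        (( min == 0 || count < min )) && min=$count
        (( count > max )) && max=$count
    done
    # pass when the busiest lcore is within 105% of the idlest one
    (( max * 100 <= min * 105 ))
}

# Feed it the sample counts from the run above.
printf 'lcore 0: 109162\nlcore 1: 109161\nlcore 2: 109164\nlcore 3: 109163\n' \
    | check_lcore_balance && echo balanced
```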
00:05:50.898 00:05:50.898 real 0m1.312s 00:05:50.898 user 0m4.085s 00:05:50.898 sys 0m0.108s 00:05:50.898 04:52:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.898 04:52:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.898 ************************************ 00:05:50.898 END TEST event_perf 00:05:50.898 ************************************ 00:05:50.898 04:52:07 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.898 04:52:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:50.898 04:52:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.898 04:52:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.898 ************************************ 00:05:50.898 START TEST event_reactor 00:05:50.898 ************************************ 00:05:50.898 04:52:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.898 [2024-11-21 04:52:07.481943] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:05:50.898 [2024-11-21 04:52:07.482065] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70225 ] 00:05:51.158 [2024-11-21 04:52:07.645494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.158 [2024-11-21 04:52:07.673191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.096 test_start 00:05:52.097 oneshot 00:05:52.097 tick 100 00:05:52.097 tick 100 00:05:52.097 tick 250 00:05:52.097 tick 100 00:05:52.097 tick 100 00:05:52.097 tick 100 00:05:52.097 tick 250 00:05:52.097 tick 500 00:05:52.097 tick 100 00:05:52.097 tick 100 00:05:52.097 tick 250 00:05:52.097 tick 100 00:05:52.097 tick 100 00:05:52.097 test_end 00:05:52.097 00:05:52.097 real 0m1.298s 00:05:52.097 user 0m1.097s 00:05:52.097 sys 0m0.094s 00:05:52.097 04:52:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.097 04:52:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.097 ************************************ 00:05:52.097 END TEST event_reactor 00:05:52.097 ************************************ 00:05:52.097 04:52:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.097 04:52:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:52.097 04:52:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.097 04:52:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.097 ************************************ 00:05:52.097 START TEST event_reactor_perf 00:05:52.097 ************************************ 00:05:52.097 04:52:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.357 [2024-11-21 
04:52:08.841215] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:05:52.357 [2024-11-21 04:52:08.841327] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70267 ] 00:05:52.357 [2024-11-21 04:52:09.011524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.357 [2024-11-21 04:52:09.035942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.757 test_start 00:05:53.757 test_end 00:05:53.757 Performance: 395823 events per second 00:05:53.757 00:05:53.757 real 0m1.302s 00:05:53.757 user 0m1.110s 00:05:53.757 sys 0m0.085s 00:05:53.757 04:52:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.757 04:52:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.757 ************************************ 00:05:53.757 END TEST event_reactor_perf 00:05:53.757 ************************************ 00:05:53.757 04:52:10 event -- event/event.sh@49 -- # uname -s 00:05:53.757 04:52:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.757 04:52:10 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:53.757 04:52:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.757 04:52:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.757 04:52:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.757 ************************************ 00:05:53.757 START TEST event_scheduler 00:05:53.757 ************************************ 00:05:53.757 04:52:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:53.757 * Looking for test storage... 
00:05:53.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:53.757 04:52:10 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.757 04:52:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.757 04:52:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.758 04:52:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.758 --rc genhtml_branch_coverage=1 00:05:53.758 --rc genhtml_function_coverage=1 00:05:53.758 --rc genhtml_legend=1 00:05:53.758 --rc geninfo_all_blocks=1 00:05:53.758 --rc geninfo_unexecuted_blocks=1 00:05:53.758 00:05:53.758 ' 00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.758 --rc genhtml_branch_coverage=1 00:05:53.758 --rc genhtml_function_coverage=1 00:05:53.758 --rc 
genhtml_legend=1 00:05:53.758 --rc geninfo_all_blocks=1 00:05:53.758 --rc geninfo_unexecuted_blocks=1 00:05:53.758 00:05:53.758 ' 00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.758 --rc genhtml_branch_coverage=1 00:05:53.758 --rc genhtml_function_coverage=1 00:05:53.758 --rc genhtml_legend=1 00:05:53.758 --rc geninfo_all_blocks=1 00:05:53.758 --rc geninfo_unexecuted_blocks=1 00:05:53.758 00:05:53.758 ' 00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.758 --rc genhtml_branch_coverage=1 00:05:53.758 --rc genhtml_function_coverage=1 00:05:53.758 --rc genhtml_legend=1 00:05:53.758 --rc geninfo_all_blocks=1 00:05:53.758 --rc geninfo_unexecuted_blocks=1 00:05:53.758 00:05:53.758 ' 00:05:53.758 04:52:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.758 04:52:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70332 00:05:53.758 04:52:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:53.758 04:52:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.758 04:52:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70332 00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70332 ']' 00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
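The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper, which polls until the target's RPC socket appears. A minimal sketch of that polling idea (illustrative only; the real helper lives in common/autotest_common.sh and also verifies the pid) might be:

```shell
#!/usr/bin/env bash
# Sketch of the "waitforlisten" pattern: poll until a process has
# created its RPC UNIX socket, with a retry cap. The retry count and
# sleep interval are illustrative assumptions.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                          # gave up waiting
}
```

In the real harness this runs before any rpc_cmd call, so RPCs never race the target's startup.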
00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:53.758 04:52:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:53.758 [2024-11-21 04:52:10.474488] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:05:53.758 [2024-11-21 04:52:10.474641] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70332 ]
00:05:54.017 [2024-11-21 04:52:10.647983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:54.017 [2024-11-21 04:52:10.700461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.017 [2024-11-21 04:52:10.700642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:54.017 [2024-11-21 04:52:10.700660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:54.017 [2024-11-21 04:52:10.700755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:54.584 04:52:11 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:54.584 04:52:11 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:54.584 04:52:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:54.584 04:52:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.584 04:52:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:54.584 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:54.584 POWER: Cannot set governor of lcore 0 to userspace
00:05:54.584 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:54.584 POWER: Cannot set governor of lcore 0 to performance
00:05:54.584 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:54.584 POWER: Cannot set governor of lcore 0 to userspace
00:05:54.584 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:54.584 POWER: Cannot set governor of lcore 0 to userspace
00:05:54.584 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:05:54.584 POWER: Unable to set Power Management Environment for lcore 0
00:05:54.584 [2024-11-21 04:52:11.309782] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:05:54.584 [2024-11-21 04:52:11.309821] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:05:54.584 [2024-11-21 04:52:11.309861] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:54.584 [2024-11-21 04:52:11.309889] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:54.584 [2024-11-21 04:52:11.309904] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:54.584 [2024-11-21 04:52:11.309921] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:54.584 04:52:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:54.584 04:52:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:54.584 04:52:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.584 04:52:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:54.843 [2024-11-21 04:52:11.439939] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:54.843 04:52:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:54.843 04:52:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:54.843 04:52:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:54.843 04:52:11 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:54.843 04:52:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:54.843 ************************************
00:05:54.843 START TEST scheduler_create_thread
00:05:54.843 ************************************
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:54.843 2
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:54.843 3
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:54.843 4
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:54.843 5
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:54.843 6
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:54.843 7
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:54.843 8
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:54.843 9
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:54.843 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:55.412 10
00:05:55.412 04:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:55.412 04:52:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:55.412 04:52:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:55.412 04:52:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:56.788 04:52:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:56.788 04:52:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:56.788 04:52:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:56.788 04:52:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:56.788 04:52:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.725 04:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:57.725 04:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:57.725 04:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:57.725 04:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:58.295 04:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:58.295 04:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:58.295 04:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:58.295 04:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:58.295 04:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:59.231 04:52:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:59.231
00:05:59.231 real 0m4.213s
00:05:59.231 user 0m0.029s
00:05:59.231 sys 0m0.009s
00:05:59.231 04:52:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.231 04:52:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:59.231 ************************************
00:05:59.231 END TEST scheduler_create_thread
00:05:59.231 ************************************
00:05:59.231 04:52:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:59.231 04:52:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70332
00:05:59.231 04:52:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70332 ']'
00:05:59.231 04:52:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70332
00:05:59.231 04:52:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:59.231 04:52:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:59.231 04:52:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70332
00:05:59.231 04:52:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:59.231 04:52:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
killing process with pid 70332
00:05:59.231 04:52:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70332'
00:05:59.231 04:52:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70332
00:05:59.231 04:52:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70332
00:05:59.490 [2024-11-21 04:52:16.046460] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:59.748 ************************************
00:05:59.748 END TEST event_scheduler
00:05:59.748 ************************************
00:05:59.748
00:05:59.748 real 0m6.264s
00:05:59.748 user 0m13.860s
00:05:59.748 sys 0m0.573s
00:05:59.748 04:52:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.748 04:52:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:00.007 04:52:16 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:00.007 04:52:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:00.007 04:52:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:00.007 04:52:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:00.007 04:52:16 event -- common/autotest_common.sh@10 -- # set +x
00:06:00.007 ************************************
00:06:00.007 START TEST app_repeat
00:06:00.007 ************************************
00:06:00.007 04:52:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:06:00.007 04:52:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:00.007 04:52:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:00.007 04:52:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:00.007 04:52:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:00.007 04:52:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:00.007 04:52:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:00.007 04:52:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:00.007 04:52:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70449
00:06:00.007 04:52:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
Process app_repeat pid: 70449
04:52:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70449'
04:52:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
04:52:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
spdk_app_start Round 0
04:52:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
04:52:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70449 /var/tmp/spdk-nbd.sock
04:52:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70449 ']'
04:52:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
04:52:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
04:52:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
04:52:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
04:52:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:00.007 [2024-11-21 04:52:16.575365] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:06:00.007 [2024-11-21 04:52:16.575523] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70449 ]
00:06:00.267 [2024-11-21 04:52:16.749142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:00.267 [2024-11-21 04:52:16.777670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.267 [2024-11-21 04:52:16.777780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:00.847 04:52:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:00.847 04:52:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:00.847 04:52:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:01.126 Malloc0
00:06:01.126 04:52:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:01.385 Malloc1
00:06:01.385 04:52:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:01.385 04:52:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:01.385 /dev/nbd0
00:06:01.643 04:52:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:01.643 04:52:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:01.644 1+0 records in
00:06:01.644 1+0 records out
00:06:01.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397317 s, 10.3 MB/s
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:01.644 04:52:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:01.644 04:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:01.644 04:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:01.644 04:52:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:01.644 /dev/nbd1
00:06:01.902 04:52:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:01.902 04:52:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:01.902 1+0 records in
00:06:01.902 1+0 records out
00:06:01.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377925 s, 10.8 MB/s
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:01.902 04:52:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:01.902 04:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:01.902 04:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:01.902 04:52:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:01.902 04:52:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:01.902 04:52:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:01.902 04:52:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:01.902 {
00:06:01.902 "nbd_device": "/dev/nbd0",
00:06:01.902 "bdev_name": "Malloc0"
00:06:01.902 },
00:06:01.902 {
00:06:01.902 "nbd_device": "/dev/nbd1",
00:06:01.902 "bdev_name": "Malloc1"
00:06:01.902 }
00:06:01.902 ]'
00:06:01.902 04:52:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:01.902 {
00:06:01.902 "nbd_device": "/dev/nbd0",
00:06:01.902 "bdev_name": "Malloc0"
00:06:01.902 },
00:06:01.902 {
00:06:01.902 "nbd_device": "/dev/nbd1",
00:06:01.902 "bdev_name": "Malloc1"
00:06:01.902 }
00:06:01.902 ]'
00:06:01.902 04:52:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:02.161 /dev/nbd1'
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:02.161 /dev/nbd1'
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:02.161 04:52:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:02.162 256+0 records in
00:06:02.162 256+0 records out
00:06:02.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431915 s, 243 MB/s
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:02.162 256+0 records in
00:06:02.162 256+0 records out
00:06:02.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023101 s, 45.4 MB/s
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:02.162 256+0 records in
00:06:02.162 256+0 records out
00:06:02.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291199 s, 36.0 MB/s
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:02.162 04:52:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:02.420 04:52:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:02.420 04:52:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:02.420 04:52:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:02.420 04:52:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:02.420 04:52:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:02.420 04:52:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:02.420 04:52:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:02.420 04:52:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:02.420 04:52:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:02.420 04:52:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:02.678 04:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:02.936 04:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:02.936 04:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:02.936 04:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:02.936 04:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:02.936 04:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:02.936 04:52:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:02.936 04:52:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:02.936 04:52:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:02.936 04:52:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:02.936 04:52:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:03.193 04:52:19 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:03.193 [2024-11-21 04:52:19.805290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:03.193 [2024-11-21 04:52:19.828968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.193 [2024-11-21 04:52:19.828975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:03.193 [2024-11-21 04:52:19.870735] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:03.193 [2024-11-21 04:52:19.870819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
spdk_app_start Round 1
04:52:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
04:52:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
04:52:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70449 /var/tmp/spdk-nbd.sock
04:52:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70449 ']'
04:52:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
04:52:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
04:52:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:06.474 04:52:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.474 04:52:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.474 04:52:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.474 04:52:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.474 04:52:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.474 Malloc0 00:06:06.474 04:52:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.732 Malloc1 00:06:06.732 04:52:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.732 04:52:23 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.732 04:52:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.991 /dev/nbd0 00:06:06.991 04:52:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.991 04:52:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.991 1+0 records in 00:06:06.991 1+0 records out 00:06:06.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350515 s, 11.7 MB/s 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.991 
04:52:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.991 04:52:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.991 04:52:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.991 04:52:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.991 04:52:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.249 /dev/nbd1 00:06:07.249 04:52:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.249 04:52:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.249 04:52:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.250 1+0 records in 00:06:07.250 1+0 records out 00:06:07.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367042 s, 11.2 MB/s 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.250 04:52:23 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.250 04:52:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.250 04:52:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.250 04:52:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.250 04:52:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.250 04:52:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.250 04:52:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.508 { 00:06:07.508 "nbd_device": "/dev/nbd0", 00:06:07.508 "bdev_name": "Malloc0" 00:06:07.508 }, 00:06:07.508 { 00:06:07.508 "nbd_device": "/dev/nbd1", 00:06:07.508 "bdev_name": "Malloc1" 00:06:07.508 } 00:06:07.508 ]' 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.508 { 00:06:07.508 "nbd_device": "/dev/nbd0", 00:06:07.508 "bdev_name": "Malloc0" 00:06:07.508 }, 00:06:07.508 { 00:06:07.508 "nbd_device": "/dev/nbd1", 00:06:07.508 "bdev_name": "Malloc1" 00:06:07.508 } 00:06:07.508 ]' 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.508 /dev/nbd1' 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.508 /dev/nbd1' 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.508 
04:52:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.508 04:52:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.508 256+0 records in 00:06:07.508 256+0 records out 00:06:07.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129952 s, 80.7 MB/s 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.509 256+0 records in 00:06:07.509 256+0 records out 00:06:07.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241556 s, 43.4 MB/s 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.509 256+0 records in 00:06:07.509 256+0 records out 00:06:07.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281445 s, 37.3 MB/s 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.509 04:52:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.767 04:52:24 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.767 04:52:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.767 04:52:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.767 04:52:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.767 04:52:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.767 04:52:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.767 04:52:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.767 04:52:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.767 04:52:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.767 04:52:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.025 04:52:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.283 04:52:24 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.283 04:52:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.283 04:52:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.542 04:52:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.542 [2024-11-21 04:52:25.192966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.542 [2024-11-21 04:52:25.216230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.542 [2024-11-21 04:52:25.216265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.542 [2024-11-21 04:52:25.257251] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.542 [2024-11-21 04:52:25.257327] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.862 spdk_app_start Round 2 00:06:11.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:11.862 04:52:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.862 04:52:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.862 04:52:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70449 /var/tmp/spdk-nbd.sock 00:06:11.862 04:52:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70449 ']' 00:06:11.862 04:52:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.862 04:52:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.862 04:52:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.862 04:52:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.862 04:52:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.862 04:52:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.862 04:52:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:11.862 04:52:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.862 Malloc0 00:06:11.862 04:52:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.121 Malloc1 00:06:12.121 04:52:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.121 04:52:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.381 /dev/nbd0 00:06:12.381 04:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.381 04:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.381 1+0 records in 00:06:12.381 1+0 records out 00:06:12.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212172 s, 19.3 MB/s 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.381 04:52:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.381 04:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.381 04:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.381 04:52:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.638 /dev/nbd1 00:06:12.638 04:52:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.638 04:52:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:12.638 04:52:29 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.638 1+0 records in 00:06:12.638 1+0 records out 00:06:12.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362332 s, 11.3 MB/s 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.638 04:52:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.638 04:52:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.638 04:52:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.638 04:52:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.638 04:52:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.638 04:52:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.896 { 00:06:12.896 "nbd_device": "/dev/nbd0", 00:06:12.896 "bdev_name": "Malloc0" 00:06:12.896 }, 00:06:12.896 { 00:06:12.896 "nbd_device": "/dev/nbd1", 00:06:12.896 "bdev_name": "Malloc1" 00:06:12.896 } 00:06:12.896 ]' 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.896 { 
00:06:12.896 "nbd_device": "/dev/nbd0", 00:06:12.896 "bdev_name": "Malloc0" 00:06:12.896 }, 00:06:12.896 { 00:06:12.896 "nbd_device": "/dev/nbd1", 00:06:12.896 "bdev_name": "Malloc1" 00:06:12.896 } 00:06:12.896 ]' 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.896 /dev/nbd1' 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.896 /dev/nbd1' 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.896 256+0 records in 00:06:12.896 256+0 records out 00:06:12.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131416 s, 79.8 MB/s 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.896 04:52:29 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.896 256+0 records in 00:06:12.896 256+0 records out 00:06:12.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215608 s, 48.6 MB/s 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.896 256+0 records in 00:06:12.896 256+0 records out 00:06:12.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242039 s, 43.3 MB/s 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.896 04:52:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.897 04:52:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.155 04:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.155 04:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.155 04:52:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.155 04:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.155 04:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.155 04:52:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.155 04:52:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.155 04:52:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.155 04:52:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.155 04:52:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.413 04:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.413 04:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.413 04:52:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.413 04:52:29 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.413 04:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.413 04:52:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.413 04:52:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.413 04:52:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.413 04:52:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.413 04:52:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.413 04:52:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.671 04:52:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.671 04:52:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.929 04:52:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.929 
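The trace above repeats the same nbd data-verify round each iteration: create two malloc bdevs, attach them as /dev/nbd0 and /dev/nbd1 over the RPC socket, write 256 blocks of 4 KiB random data to each, byte-compare the first 1 MiB back with cmp, then detach and kill the target. A minimal sketch of that write/verify loop follows; plain files stand in for the nbd devices so it runs without an SPDK target, and the working-directory layout is illustrative, not taken from the harness (with a real target you would first attach the bdevs via `rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0`, as the log shows):

```shell
#!/usr/bin/env bash
# Hedged sketch of the nbd_dd_data_verify flow traced in the log.
# Assumption: ordinary files replace /dev/nbd0 and /dev/nbd1 so the
# sketch needs no SPDK target or nbd kernel module.
set -euo pipefail

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

tmp_file="$workdir/nbdrandtest"
nbd_list=("$workdir/nbd0" "$workdir/nbd1")   # stand-ins for the nbd devices

# Write phase: generate 1 MiB of random data, copy it to every "device"
# (the harness adds oflag=direct, which plain files do not need).
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1 MiB of each device against
# the reference file, exactly as the traced cmp invocations do.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

echo "verify OK"
```

On a real run the teardown that follows in the log (`nbd_stop_disk`, then `spdk_kill_instance SIGTERM`) would replace the `trap` cleanup here.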
[2024-11-21 04:52:30.602278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.929 [2024-11-21 04:52:30.625654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.929 [2024-11-21 04:52:30.625659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.187 [2024-11-21 04:52:30.668061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.187 [2024-11-21 04:52:30.668126] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.475 04:52:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70449 /var/tmp/spdk-nbd.sock 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70449 ']' 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.475 04:52:33 event.app_repeat -- event/event.sh@39 -- # killprocess 70449 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70449 ']' 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70449 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70449 00:06:17.475 killing process with pid 70449 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70449' 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70449 00:06:17.475 04:52:33 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70449 00:06:17.475 spdk_app_start is called in Round 0. 00:06:17.475 Shutdown signal received, stop current app iteration 00:06:17.475 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 reinitialization... 00:06:17.475 spdk_app_start is called in Round 1. 00:06:17.475 Shutdown signal received, stop current app iteration 00:06:17.475 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 reinitialization... 00:06:17.475 spdk_app_start is called in Round 2. 
00:06:17.475 Shutdown signal received, stop current app iteration 00:06:17.475 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 reinitialization... 00:06:17.475 spdk_app_start is called in Round 3. 00:06:17.475 Shutdown signal received, stop current app iteration 00:06:17.475 04:52:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.476 04:52:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:17.476 00:06:17.476 real 0m17.385s 00:06:17.476 user 0m38.629s 00:06:17.476 sys 0m2.413s 00:06:17.476 04:52:33 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.476 04:52:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.476 ************************************ 00:06:17.476 END TEST app_repeat 00:06:17.476 ************************************ 00:06:17.476 04:52:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.476 04:52:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:17.476 04:52:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.476 04:52:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.476 04:52:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.476 ************************************ 00:06:17.476 START TEST cpu_locks 00:06:17.476 ************************************ 00:06:17.476 04:52:33 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:17.476 * Looking for test storage... 
00:06:17.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.476 04:52:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.476 --rc genhtml_branch_coverage=1 00:06:17.476 --rc genhtml_function_coverage=1 00:06:17.476 --rc genhtml_legend=1 00:06:17.476 --rc geninfo_all_blocks=1 00:06:17.476 --rc geninfo_unexecuted_blocks=1 00:06:17.476 00:06:17.476 ' 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.476 --rc genhtml_branch_coverage=1 00:06:17.476 --rc genhtml_function_coverage=1 00:06:17.476 --rc genhtml_legend=1 00:06:17.476 --rc geninfo_all_blocks=1 00:06:17.476 --rc geninfo_unexecuted_blocks=1 
00:06:17.476 00:06:17.476 ' 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.476 --rc genhtml_branch_coverage=1 00:06:17.476 --rc genhtml_function_coverage=1 00:06:17.476 --rc genhtml_legend=1 00:06:17.476 --rc geninfo_all_blocks=1 00:06:17.476 --rc geninfo_unexecuted_blocks=1 00:06:17.476 00:06:17.476 ' 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.476 --rc genhtml_branch_coverage=1 00:06:17.476 --rc genhtml_function_coverage=1 00:06:17.476 --rc genhtml_legend=1 00:06:17.476 --rc geninfo_all_blocks=1 00:06:17.476 --rc geninfo_unexecuted_blocks=1 00:06:17.476 00:06:17.476 ' 00:06:17.476 04:52:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:17.476 04:52:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:17.476 04:52:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:17.476 04:52:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.476 04:52:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.476 ************************************ 00:06:17.476 START TEST default_locks 00:06:17.476 ************************************ 00:06:17.476 04:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:17.476 04:52:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70874 00:06:17.476 04:52:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.476 
04:52:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70874 00:06:17.476 04:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70874 ']' 00:06:17.476 04:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.476 04:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.476 04:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.476 04:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.476 04:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.735 [2024-11-21 04:52:34.329033] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:06:17.735 [2024-11-21 04:52:34.329190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70874 ] 00:06:17.993 [2024-11-21 04:52:34.497334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.993 [2024-11-21 04:52:34.522106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.561 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.561 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:18.561 04:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70874 00:06:18.561 04:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.561 04:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70874 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70874 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70874 ']' 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70874 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70874 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.821 killing process with pid 70874 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 70874' 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70874 00:06:18.821 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70874 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70874 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70874 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70874 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70874 ']' 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.080 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70874) - No such process 00:06:19.080 ERROR: process (pid: 70874) is no longer running 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.080 00:06:19.080 real 0m1.551s 00:06:19.080 user 0m1.498s 00:06:19.080 sys 0m0.558s 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.080 04:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.080 ************************************ 00:06:19.080 END TEST default_locks 00:06:19.080 ************************************ 00:06:19.080 04:52:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:19.080 04:52:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:19.080 04:52:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.080 04:52:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.339 ************************************ 00:06:19.339 START TEST default_locks_via_rpc 00:06:19.339 ************************************ 00:06:19.339 04:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:19.339 04:52:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70924 00:06:19.339 04:52:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.339 04:52:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70924 00:06:19.339 04:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70924 ']' 00:06:19.339 04:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.339 04:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.339 04:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.339 04:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.339 04:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.339 [2024-11-21 04:52:35.910874] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:06:19.339 [2024-11-21 04:52:35.910998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70924 ] 00:06:19.598 [2024-11-21 04:52:36.083033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.598 [2024-11-21 04:52:36.109124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.166 04:52:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70924 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70924 00:06:20.166 04:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70924 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70924 ']' 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70924 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70924 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.425 killing process with pid 70924 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70924' 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70924 00:06:20.425 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70924 00:06:20.994 00:06:20.994 real 0m1.800s 00:06:20.994 user 0m1.768s 00:06:20.994 sys 0m0.563s 00:06:20.994 04:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.994 04:52:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.994 ************************************ 00:06:20.994 END TEST default_locks_via_rpc 00:06:20.994 ************************************ 00:06:20.994 04:52:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.994 04:52:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.994 04:52:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.994 04:52:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.994 ************************************ 00:06:20.994 START TEST non_locking_app_on_locked_coremask 00:06:20.994 ************************************ 00:06:20.994 04:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:20.994 04:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70976 00:06:20.994 04:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.994 04:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70976 /var/tmp/spdk.sock 00:06:20.994 04:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70976 ']' 00:06:20.994 04:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.994 04:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.994 04:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.994 04:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.994 04:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.253 [2024-11-21 04:52:37.781817] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:21.253 [2024-11-21 04:52:37.781929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70976 ] 00:06:21.253 [2024-11-21 04:52:37.940507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.253 [2024-11-21 04:52:37.965876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.190 04:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.190 04:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.190 04:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70992 00:06:22.190 04:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70992 /var/tmp/spdk2.sock 00:06:22.190 04:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:22.190 04:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70992 ']' 00:06:22.190 04:52:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.190 04:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.190 04:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.190 04:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.190 04:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.190 [2024-11-21 04:52:38.684785] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:22.190 [2024-11-21 04:52:38.684903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70992 ] 00:06:22.190 [2024-11-21 04:52:38.850132] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.190 [2024-11-21 04:52:38.850191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.190 [2024-11-21 04:52:38.904411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.125 04:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.125 04:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:23.125 04:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70976 00:06:23.125 04:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70976 00:06:23.125 04:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70976 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70976 ']' 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70976 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70976 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.692 killing process with pid 70976 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 70976' 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70976 00:06:23.692 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70976 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70992 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70992 ']' 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70992 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70992 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.260 killing process with pid 70992 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70992' 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70992 00:06:24.260 04:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70992 00:06:24.828 00:06:24.828 real 0m3.639s 00:06:24.828 user 0m3.880s 00:06:24.828 sys 0m1.116s 00:06:24.828 04:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:24.828 04:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.828 ************************************ 00:06:24.828 END TEST non_locking_app_on_locked_coremask 00:06:24.828 ************************************ 00:06:24.828 04:52:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:24.828 04:52:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.828 04:52:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.828 04:52:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.828 ************************************ 00:06:24.828 START TEST locking_app_on_unlocked_coremask 00:06:24.828 ************************************ 00:06:24.828 04:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:24.828 04:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71055 00:06:24.828 04:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71055 /var/tmp/spdk.sock 00:06:24.828 04:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.828 04:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71055 ']' 00:06:24.828 04:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.828 04:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:24.828 04:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.828 04:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.828 04:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.828 [2024-11-21 04:52:41.483734] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:24.828 [2024-11-21 04:52:41.483851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71055 ] 00:06:25.087 [2024-11-21 04:52:41.654417] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:25.087 [2024-11-21 04:52:41.654464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.087 [2024-11-21 04:52:41.680027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71067 00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71067 /var/tmp/spdk2.sock 00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71067 ']' 
00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.684 04:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.684 [2024-11-21 04:52:42.388678] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:25.684 [2024-11-21 04:52:42.388816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:06:25.957 [2024-11-21 04:52:42.550619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.957 [2024-11-21 04:52:42.599738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.534 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.535 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.535 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71067 00:06:26.535 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71067 00:06:26.535 04:52:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71055 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71055 ']' 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71055 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71055 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.102 killing process with pid 71055 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71055' 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71055 00:06:27.102 04:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71055 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71067 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71067 ']' 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71067 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71067 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.040 killing process with pid 71067 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71067' 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71067 00:06:28.040 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71067 00:06:28.299 00:06:28.299 real 0m3.469s 00:06:28.299 user 0m3.659s 00:06:28.299 sys 0m1.067s 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.299 ************************************ 00:06:28.299 END TEST locking_app_on_unlocked_coremask 00:06:28.299 ************************************ 00:06:28.299 04:52:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:28.299 04:52:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.299 04:52:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.299 04:52:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.299 ************************************ 00:06:28.299 START TEST 
locking_app_on_locked_coremask 00:06:28.299 ************************************ 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71136 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71136 /var/tmp/spdk.sock 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71136 ']' 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.299 04:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.559 [2024-11-21 04:52:45.035621] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:06:28.559 [2024-11-21 04:52:45.035772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71136 ] 00:06:28.559 [2024-11-21 04:52:45.211253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.559 [2024-11-21 04:52:45.236814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71154 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71154 /var/tmp/spdk2.sock 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71154 /var/tmp/spdk2.sock 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71154 /var/tmp/spdk2.sock 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71154 ']' 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.128 04:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.387 [2024-11-21 04:52:45.922293] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:29.387 [2024-11-21 04:52:45.922412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71154 ] 00:06:29.387 [2024-11-21 04:52:46.088410] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71136 has claimed it. 00:06:29.387 [2024-11-21 04:52:46.088471] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:29.955 ERROR: process (pid: 71154) is no longer running 00:06:29.955 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71154) - No such process 00:06:29.955 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.955 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:29.956 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:29.956 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.956 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.956 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.956 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71136 00:06:29.956 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71136 00:06:29.956 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.523 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71136 00:06:30.523 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71136 ']' 00:06:30.523 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71136 00:06:30.523 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.523 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.523 04:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71136 00:06:30.523 
04:52:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.523 04:52:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.523 killing process with pid 71136 00:06:30.523 04:52:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71136' 00:06:30.523 04:52:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71136 00:06:30.523 04:52:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71136 00:06:30.782 00:06:30.782 real 0m2.459s 00:06:30.782 user 0m2.616s 00:06:30.782 sys 0m0.776s 00:06:30.782 04:52:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.782 04:52:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.782 ************************************ 00:06:30.782 END TEST locking_app_on_locked_coremask 00:06:30.782 ************************************ 00:06:30.782 04:52:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:30.782 04:52:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.783 04:52:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.783 04:52:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.783 ************************************ 00:06:30.783 START TEST locking_overlapped_coremask 00:06:30.783 ************************************ 00:06:30.783 04:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:30.783 04:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71196 00:06:30.783 04:52:47 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:30.783 04:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71196 /var/tmp/spdk.sock 00:06:30.783 04:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71196 ']' 00:06:30.783 04:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.783 04:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.783 04:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.783 04:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.783 04:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.042 [2024-11-21 04:52:47.550108] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:06:31.042 [2024-11-21 04:52:47.550223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71196 ] 00:06:31.042 [2024-11-21 04:52:47.719906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.042 [2024-11-21 04:52:47.750295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.042 [2024-11-21 04:52:47.750395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.042 [2024-11-21 04:52:47.750517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71214 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71214 /var/tmp/spdk2.sock 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71214 /var/tmp/spdk2.sock 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71214 /var/tmp/spdk2.sock 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71214 ']' 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.982 04:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 [2024-11-21 04:52:48.461907] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:31.982 [2024-11-21 04:52:48.462400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71214 ] 00:06:31.982 [2024-11-21 04:52:48.628079] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71196 has claimed it. 00:06:31.982 [2024-11-21 04:52:48.632175] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
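The `spdk_cpu_lock` entries that `lslocks -p <pid> | grep -q spdk_cpu_lock` verifies in these tests are ordinary files under `/var/tmp`, one per claimed core, held with an exclusive lock. A minimal shell sketch of the same mechanism using flock(1); the lock directory, file prefix, and fd number here are illustrative, and SPDK takes the real locks in C inside app.c:

```shell
# Illustrative sketch of a per-core advisory lock file, mirroring the
# "Cannot create lock on core N" failure mode logged above. LOCK_DIR,
# the demo_cpu_lock_ prefix, and fd 9 are assumptions for this demo.
claim_core() {
    local core=$1
    local lockfile
    lockfile=$(printf '%s/demo_cpu_lock_%03d' "${LOCK_DIR:-/tmp}" "$core")
    exec 9>"$lockfile"        # keep the fd open so the lock stays held
    if ! flock -n 9; then
        echo "Cannot create lock on core $core, another process has claimed it." >&2
        return 1
    fi
    return 0
}
```

Because flock locks belong to the open file description, the lock persists for as long as fd 9 stays open, and a second process opening the same file gets `EWOULDBLOCK` from `flock -n` exactly as the second `spdk_tgt` does here.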
00:06:32.553 ERROR: process (pid: 71214) is no longer running 00:06:32.553 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71214) - No such process 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71196 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71196 ']' 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71196 00:06:32.553 04:52:49 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71196 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.553 killing process with pid 71196 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71196' 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71196 00:06:32.553 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71196 00:06:32.813 00:06:32.813 real 0m2.047s 00:06:32.813 user 0m5.487s 00:06:32.813 sys 0m0.535s 00:06:32.813 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.813 04:52:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.813 ************************************ 00:06:32.813 END TEST locking_overlapped_coremask 00:06:32.813 ************************************ 00:06:33.073 04:52:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:33.073 04:52:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.073 04:52:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.073 04:52:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.073 ************************************ 00:06:33.073 START TEST 
locking_overlapped_coremask_via_rpc 00:06:33.073 ************************************ 00:06:33.073 04:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:33.073 04:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71256 00:06:33.073 04:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:33.073 04:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71256 /var/tmp/spdk.sock 00:06:33.073 04:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71256 ']' 00:06:33.073 04:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.073 04:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.073 04:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.073 04:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.073 04:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.073 [2024-11-21 04:52:49.672447] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:06:33.073 [2024-11-21 04:52:49.672595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71256 ] 00:06:33.334 [2024-11-21 04:52:49.847867] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:33.334 [2024-11-21 04:52:49.847917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.334 [2024-11-21 04:52:49.877278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.334 [2024-11-21 04:52:49.877378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.334 [2024-11-21 04:52:49.877494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71274 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71274 /var/tmp/spdk2.sock 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71274 ']' 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.904 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.904 04:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.904 [2024-11-21 04:52:50.571052] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:33.904 [2024-11-21 04:52:50.571202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71274 ] 00:06:34.164 [2024-11-21 04:52:50.742948] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
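Both targets in this test start with `--disable-cpumask-locks`, so the locks are claimed afterwards through the `framework_enable_cpumask_locks` JSON-RPC call, whose request and error response the log captures as plain JSON. A sketch of building that request payload; the helper name is an assumption for illustration, while the field names match the request/response pair in this log:

```shell
# Hypothetical helper that assembles the JSON-RPC request body shown in
# this log ("method" and "req_id" fields). The function name is invented
# for this sketch; the field layout comes from the captured request.
build_rpc_request() {
    local method=$1 req_id=$2
    printf '{ "method": "%s", "req_id": %d }' "$method" "$req_id"
}
```

When another process already holds a requested core, the target answers this request with code -32603 and the message "Failed to claim CPU core: N", which is what the `NOT rpc_cmd` branch of the test expects.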
00:06:34.164 [2024-11-21 04:52:50.743016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.164 [2024-11-21 04:52:50.804544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.164 [2024-11-21 04:52:50.804562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.164 [2024-11-21 04:52:50.804688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.732 04:52:51 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.732 [2024-11-21 04:52:51.440323] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71256 has claimed it. 00:06:34.732 request: 00:06:34.732 { 00:06:34.732 "method": "framework_enable_cpumask_locks", 00:06:34.732 "req_id": 1 00:06:34.732 } 00:06:34.732 Got JSON-RPC error response 00:06:34.732 response: 00:06:34.732 { 00:06:34.732 "code": -32603, 00:06:34.732 "message": "Failed to claim CPU core: 2" 00:06:34.732 } 00:06:34.732 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71256 /var/tmp/spdk.sock 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 71256 ']' 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.733 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.993 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.993 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.993 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71274 /var/tmp/spdk2.sock 00:06:34.993 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71274 ']' 00:06:34.993 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.993 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.993 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:34.993 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.993 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.253 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.253 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:35.253 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:35.253 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.253 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.253 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.253 00:06:35.253 real 0m2.307s 00:06:35.253 user 0m1.093s 00:06:35.253 sys 0m0.157s 00:06:35.253 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.253 04:52:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.253 ************************************ 00:06:35.253 END TEST locking_overlapped_coremask_via_rpc 00:06:35.253 ************************************ 00:06:35.253 04:52:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:35.253 04:52:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71256 ]] 00:06:35.253 04:52:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71256 00:06:35.253 04:52:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71256 ']' 00:06:35.253 04:52:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71256 00:06:35.253 04:52:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:35.253 04:52:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.253 04:52:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71256 00:06:35.253 04:52:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.253 04:52:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.253 killing process with pid 71256 00:06:35.253 04:52:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71256' 00:06:35.253 04:52:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71256 00:06:35.253 04:52:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71256 00:06:35.824 04:52:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71274 ]] 00:06:35.824 04:52:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71274 00:06:35.824 04:52:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71274 ']' 00:06:35.824 04:52:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71274 00:06:35.824 04:52:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:35.824 04:52:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.824 04:52:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71274 00:06:35.824 04:52:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:35.824 04:52:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:35.824 killing process with pid 71274 00:06:35.824 04:52:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71274' 00:06:35.824 04:52:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71274 00:06:35.824 04:52:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71274 00:06:36.084 04:52:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:36.084 04:52:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:36.084 04:52:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71256 ]] 00:06:36.084 04:52:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71256 00:06:36.084 04:52:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71256 ']' 00:06:36.084 04:52:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71256 00:06:36.084 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71256) - No such process 00:06:36.084 Process with pid 71256 is not found 00:06:36.084 04:52:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71256 is not found' 00:06:36.084 04:52:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71274 ]] 00:06:36.084 04:52:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71274 00:06:36.084 04:52:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71274 ']' 00:06:36.084 04:52:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71274 00:06:36.084 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71274) - No such process 00:06:36.084 Process with pid 71274 is not found 00:06:36.084 04:52:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71274 is not found' 00:06:36.084 04:52:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:36.084 00:06:36.084 real 0m18.819s 00:06:36.084 user 0m31.397s 00:06:36.084 sys 0m5.885s 00:06:36.084 04:52:52 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.084 04:52:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.084 
************************************ 00:06:36.084 END TEST cpu_locks 00:06:36.084 ************************************ 00:06:36.344 00:06:36.344 real 0m47.012s 00:06:36.344 user 1m30.414s 00:06:36.344 sys 0m9.581s 00:06:36.344 04:52:52 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.344 04:52:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.344 ************************************ 00:06:36.344 END TEST event 00:06:36.345 ************************************ 00:06:36.345 04:52:52 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:36.345 04:52:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.345 04:52:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.345 04:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:36.345 ************************************ 00:06:36.345 START TEST thread 00:06:36.345 ************************************ 00:06:36.345 04:52:52 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:36.345 * Looking for test storage... 
00:06:36.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:36.345 04:52:53 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.345 04:52:53 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.345 04:52:53 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.606 04:52:53 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.606 04:52:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.606 04:52:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.606 04:52:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.606 04:52:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.606 04:52:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.606 04:52:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.606 04:52:53 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.606 04:52:53 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.606 04:52:53 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.606 04:52:53 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.606 04:52:53 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.606 04:52:53 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:36.606 04:52:53 thread -- scripts/common.sh@345 -- # : 1 00:06:36.606 04:52:53 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.606 04:52:53 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.606 04:52:53 thread -- scripts/common.sh@365 -- # decimal 1 00:06:36.606 04:52:53 thread -- scripts/common.sh@353 -- # local d=1 00:06:36.606 04:52:53 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.606 04:52:53 thread -- scripts/common.sh@355 -- # echo 1 00:06:36.606 04:52:53 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.606 04:52:53 thread -- scripts/common.sh@366 -- # decimal 2 00:06:36.606 04:52:53 thread -- scripts/common.sh@353 -- # local d=2 00:06:36.606 04:52:53 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.606 04:52:53 thread -- scripts/common.sh@355 -- # echo 2 00:06:36.606 04:52:53 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.606 04:52:53 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.606 04:52:53 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.606 04:52:53 thread -- scripts/common.sh@368 -- # return 0 00:06:36.606 04:52:53 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.606 04:52:53 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.606 --rc genhtml_branch_coverage=1 00:06:36.606 --rc genhtml_function_coverage=1 00:06:36.606 --rc genhtml_legend=1 00:06:36.606 --rc geninfo_all_blocks=1 00:06:36.606 --rc geninfo_unexecuted_blocks=1 00:06:36.606 00:06:36.606 ' 00:06:36.606 04:52:53 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.606 --rc genhtml_branch_coverage=1 00:06:36.606 --rc genhtml_function_coverage=1 00:06:36.606 --rc genhtml_legend=1 00:06:36.606 --rc geninfo_all_blocks=1 00:06:36.606 --rc geninfo_unexecuted_blocks=1 00:06:36.606 00:06:36.606 ' 00:06:36.606 04:52:53 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.606 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.606 --rc genhtml_branch_coverage=1 00:06:36.606 --rc genhtml_function_coverage=1 00:06:36.606 --rc genhtml_legend=1 00:06:36.606 --rc geninfo_all_blocks=1 00:06:36.606 --rc geninfo_unexecuted_blocks=1 00:06:36.606 00:06:36.606 ' 00:06:36.606 04:52:53 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.606 --rc genhtml_branch_coverage=1 00:06:36.606 --rc genhtml_function_coverage=1 00:06:36.606 --rc genhtml_legend=1 00:06:36.606 --rc geninfo_all_blocks=1 00:06:36.606 --rc geninfo_unexecuted_blocks=1 00:06:36.606 00:06:36.606 ' 00:06:36.606 04:52:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.606 04:52:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:36.606 04:52:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.606 04:52:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.606 ************************************ 00:06:36.606 START TEST thread_poller_perf 00:06:36.606 ************************************ 00:06:36.606 04:52:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.606 [2024-11-21 04:52:53.188009] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:06:36.607 [2024-11-21 04:52:53.188159] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71412 ] 00:06:36.866 [2024-11-21 04:52:53.360011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.866 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:36.866 [2024-11-21 04:52:53.389942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.816 [2024-11-21T04:52:54.551Z] ====================================== 00:06:37.816 [2024-11-21T04:52:54.551Z] busy:2298172788 (cyc) 00:06:37.816 [2024-11-21T04:52:54.551Z] total_run_count: 387000 00:06:37.816 [2024-11-21T04:52:54.551Z] tsc_hz: 2290000000 (cyc) 00:06:37.816 [2024-11-21T04:52:54.551Z] ====================================== 00:06:37.817 [2024-11-21T04:52:54.552Z] poller_cost: 5938 (cyc), 2593 (nsec) 00:06:37.817 00:06:37.817 real 0m1.316s 00:06:37.817 user 0m1.110s 00:06:37.817 sys 0m0.100s 00:06:37.817 04:52:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.817 04:52:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.817 ************************************ 00:06:37.817 END TEST thread_poller_perf 00:06:37.817 ************************************ 00:06:37.817 04:52:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.817 04:52:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:37.817 04:52:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.817 04:52:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.817 ************************************ 00:06:37.817 START TEST thread_poller_perf 00:06:37.817 
************************************ 00:06:37.817 04:52:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.087 [2024-11-21 04:52:54.568857] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:38.087 [2024-11-21 04:52:54.568984] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71443 ] 00:06:38.087 [2024-11-21 04:52:54.723566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.087 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:38.087 [2024-11-21 04:52:54.748163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.468 [2024-11-21T04:52:56.203Z] ====================================== 00:06:39.468 [2024-11-21T04:52:56.203Z] busy:2293064020 (cyc) 00:06:39.468 [2024-11-21T04:52:56.203Z] total_run_count: 5338000 00:06:39.468 [2024-11-21T04:52:56.203Z] tsc_hz: 2290000000 (cyc) 00:06:39.468 [2024-11-21T04:52:56.203Z] ====================================== 00:06:39.468 [2024-11-21T04:52:56.203Z] poller_cost: 429 (cyc), 187 (nsec) 00:06:39.468 00:06:39.468 real 0m1.289s 00:06:39.468 user 0m1.102s 00:06:39.468 sys 0m0.082s 00:06:39.468 04:52:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.468 04:52:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.468 ************************************ 00:06:39.468 END TEST thread_poller_perf 00:06:39.468 ************************************ 00:06:39.468 04:52:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:39.468 00:06:39.468 real 0m2.963s 00:06:39.468 user 0m2.373s 00:06:39.468 sys 0m0.393s 00:06:39.468 04:52:55 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.468 04:52:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.468 ************************************ 00:06:39.468 END TEST thread 00:06:39.468 ************************************ 00:06:39.468 04:52:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:39.468 04:52:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:39.468 04:52:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.468 04:52:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.468 04:52:55 -- common/autotest_common.sh@10 -- # set +x 00:06:39.468 ************************************ 00:06:39.468 START TEST app_cmdline 00:06:39.468 ************************************ 00:06:39.468 04:52:55 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:39.468 * Looking for test storage... 00:06:39.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:39.468 04:52:56 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.468 04:52:56 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.468 04:52:56 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.468 04:52:56 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.468 04:52:56 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:39.468 04:52:56 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.468 04:52:56 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.468 --rc genhtml_branch_coverage=1 00:06:39.468 --rc genhtml_function_coverage=1 00:06:39.468 --rc 
genhtml_legend=1 00:06:39.468 --rc geninfo_all_blocks=1 00:06:39.468 --rc geninfo_unexecuted_blocks=1 00:06:39.468 00:06:39.468 ' 00:06:39.468 04:52:56 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.468 --rc genhtml_branch_coverage=1 00:06:39.468 --rc genhtml_function_coverage=1 00:06:39.468 --rc genhtml_legend=1 00:06:39.468 --rc geninfo_all_blocks=1 00:06:39.468 --rc geninfo_unexecuted_blocks=1 00:06:39.468 00:06:39.468 ' 00:06:39.468 04:52:56 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.468 --rc genhtml_branch_coverage=1 00:06:39.468 --rc genhtml_function_coverage=1 00:06:39.468 --rc genhtml_legend=1 00:06:39.468 --rc geninfo_all_blocks=1 00:06:39.468 --rc geninfo_unexecuted_blocks=1 00:06:39.469 00:06:39.469 ' 00:06:39.469 04:52:56 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.469 --rc genhtml_branch_coverage=1 00:06:39.469 --rc genhtml_function_coverage=1 00:06:39.469 --rc genhtml_legend=1 00:06:39.469 --rc geninfo_all_blocks=1 00:06:39.469 --rc geninfo_unexecuted_blocks=1 00:06:39.469 00:06:39.469 ' 00:06:39.469 04:52:56 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:39.469 04:52:56 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71527 00:06:39.469 04:52:56 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:39.469 04:52:56 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71527 00:06:39.469 04:52:56 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71527 ']' 00:06:39.469 04:52:56 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.469 04:52:56 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:39.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.469 04:52:56 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.469 04:52:56 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.469 04:52:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 [2024-11-21 04:52:56.235358] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:39.728 [2024-11-21 04:52:56.235489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71527 ] 00:06:39.728 [2024-11-21 04:52:56.408600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.728 [2024-11-21 04:52:56.434776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:40.668 { 00:06:40.668 "version": "SPDK v25.01-pre git sha1 557f022f6", 00:06:40.668 "fields": { 00:06:40.668 "major": 25, 00:06:40.668 "minor": 1, 00:06:40.668 "patch": 0, 00:06:40.668 "suffix": "-pre", 00:06:40.668 "commit": "557f022f6" 00:06:40.668 } 00:06:40.668 } 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:40.668 04:52:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:40.668 04:52:57 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.928 request: 00:06:40.928 { 00:06:40.928 "method": "env_dpdk_get_mem_stats", 00:06:40.928 "req_id": 1 00:06:40.928 } 00:06:40.928 Got JSON-RPC error response 00:06:40.928 response: 00:06:40.928 { 00:06:40.928 "code": -32601, 00:06:40.928 "message": "Method not found" 00:06:40.928 } 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.928 04:52:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71527 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71527 ']' 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71527 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71527 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.928 killing process with pid 71527 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71527' 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@973 -- # kill 71527 00:06:40.928 04:52:57 app_cmdline -- common/autotest_common.sh@978 -- # wait 71527 00:06:41.188 00:06:41.188 real 0m1.967s 00:06:41.188 user 0m2.173s 00:06:41.188 sys 0m0.566s 00:06:41.188 04:52:57 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.188 04:52:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.188 ************************************ 00:06:41.188 END TEST app_cmdline 00:06:41.188 ************************************ 00:06:41.449 04:52:57 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:41.449 04:52:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.449 04:52:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.449 04:52:57 -- common/autotest_common.sh@10 -- # set +x 00:06:41.449 ************************************ 00:06:41.449 START TEST version 00:06:41.449 ************************************ 00:06:41.449 04:52:57 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:41.449 * Looking for test storage... 00:06:41.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:41.449 04:52:58 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.449 04:52:58 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.449 04:52:58 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.449 04:52:58 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.449 04:52:58 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.449 04:52:58 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.449 04:52:58 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.449 04:52:58 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.449 04:52:58 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.449 04:52:58 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.449 04:52:58 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.449 04:52:58 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.449 04:52:58 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.449 04:52:58 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:41.449 04:52:58 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.449 04:52:58 version -- scripts/common.sh@344 -- # case "$op" in 00:06:41.449 04:52:58 version -- scripts/common.sh@345 -- # : 1 00:06:41.449 04:52:58 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.449 04:52:58 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.449 04:52:58 version -- scripts/common.sh@365 -- # decimal 1 00:06:41.449 04:52:58 version -- scripts/common.sh@353 -- # local d=1 00:06:41.449 04:52:58 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.449 04:52:58 version -- scripts/common.sh@355 -- # echo 1 00:06:41.449 04:52:58 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.449 04:52:58 version -- scripts/common.sh@366 -- # decimal 2 00:06:41.449 04:52:58 version -- scripts/common.sh@353 -- # local d=2 00:06:41.449 04:52:58 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.449 04:52:58 version -- scripts/common.sh@355 -- # echo 2 00:06:41.449 04:52:58 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.449 04:52:58 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.449 04:52:58 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.449 04:52:58 version -- scripts/common.sh@368 -- # return 0 00:06:41.449 04:52:58 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.449 04:52:58 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.449 --rc genhtml_branch_coverage=1 00:06:41.449 --rc genhtml_function_coverage=1 00:06:41.449 --rc genhtml_legend=1 00:06:41.449 --rc geninfo_all_blocks=1 00:06:41.449 --rc geninfo_unexecuted_blocks=1 00:06:41.449 00:06:41.449 ' 00:06:41.450 04:52:58 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:41.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.450 --rc genhtml_branch_coverage=1 00:06:41.450 --rc genhtml_function_coverage=1 00:06:41.450 --rc genhtml_legend=1 00:06:41.450 --rc geninfo_all_blocks=1 00:06:41.450 --rc geninfo_unexecuted_blocks=1 00:06:41.450 00:06:41.450 ' 00:06:41.450 04:52:58 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.450 --rc genhtml_branch_coverage=1 00:06:41.450 --rc genhtml_function_coverage=1 00:06:41.450 --rc genhtml_legend=1 00:06:41.450 --rc geninfo_all_blocks=1 00:06:41.450 --rc geninfo_unexecuted_blocks=1 00:06:41.450 00:06:41.450 ' 00:06:41.450 04:52:58 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.450 --rc genhtml_branch_coverage=1 00:06:41.450 --rc genhtml_function_coverage=1 00:06:41.450 --rc genhtml_legend=1 00:06:41.450 --rc geninfo_all_blocks=1 00:06:41.450 --rc geninfo_unexecuted_blocks=1 00:06:41.450 00:06:41.450 ' 00:06:41.450 04:52:58 version -- app/version.sh@17 -- # get_header_version major 00:06:41.450 04:52:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.450 04:52:58 version -- app/version.sh@14 -- # cut -f2 00:06:41.450 04:52:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.450 04:52:58 version -- app/version.sh@17 -- # major=25 00:06:41.450 04:52:58 version -- app/version.sh@18 -- # get_header_version minor 00:06:41.450 04:52:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.450 04:52:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.450 04:52:58 version -- app/version.sh@14 -- # cut -f2 00:06:41.710 04:52:58 version -- app/version.sh@18 -- # minor=1 00:06:41.710 04:52:58 
version -- app/version.sh@19 -- # get_header_version patch 00:06:41.710 04:52:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.710 04:52:58 version -- app/version.sh@14 -- # cut -f2 00:06:41.710 04:52:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.710 04:52:58 version -- app/version.sh@19 -- # patch=0 00:06:41.710 04:52:58 version -- app/version.sh@20 -- # get_header_version suffix 00:06:41.710 04:52:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.710 04:52:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.710 04:52:58 version -- app/version.sh@14 -- # cut -f2 00:06:41.710 04:52:58 version -- app/version.sh@20 -- # suffix=-pre 00:06:41.710 04:52:58 version -- app/version.sh@22 -- # version=25.1 00:06:41.710 04:52:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:41.710 04:52:58 version -- app/version.sh@28 -- # version=25.1rc0 00:06:41.710 04:52:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:41.710 04:52:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:41.710 04:52:58 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:41.710 04:52:58 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:41.710 ************************************ 00:06:41.710 END TEST version 00:06:41.710 ************************************ 00:06:41.710 00:06:41.710 real 0m0.310s 00:06:41.710 user 0m0.193s 00:06:41.710 sys 0m0.171s 00:06:41.710 04:52:58 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.710 04:52:58 version -- common/autotest_common.sh@10 -- # set +x 00:06:41.710 
04:52:58 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:41.710 04:52:58 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:41.710 04:52:58 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:41.710 04:52:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.710 04:52:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.710 04:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:41.710 ************************************ 00:06:41.710 START TEST bdev_raid 00:06:41.710 ************************************ 00:06:41.710 04:52:58 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:41.710 * Looking for test storage... 00:06:41.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:41.710 04:52:58 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.710 04:52:58 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.970 04:52:58 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.970 04:52:58 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.970 04:52:58 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:41.970 04:52:58 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.970 04:52:58 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.970 --rc genhtml_branch_coverage=1 00:06:41.970 --rc genhtml_function_coverage=1 00:06:41.970 --rc genhtml_legend=1 00:06:41.970 --rc geninfo_all_blocks=1 00:06:41.970 --rc geninfo_unexecuted_blocks=1 00:06:41.970 00:06:41.970 ' 00:06:41.970 04:52:58 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.970 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:41.970 --rc genhtml_branch_coverage=1 00:06:41.970 --rc genhtml_function_coverage=1 00:06:41.970 --rc genhtml_legend=1 00:06:41.970 --rc geninfo_all_blocks=1 00:06:41.970 --rc geninfo_unexecuted_blocks=1 00:06:41.970 00:06:41.970 ' 00:06:41.970 04:52:58 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.970 --rc genhtml_branch_coverage=1 00:06:41.970 --rc genhtml_function_coverage=1 00:06:41.970 --rc genhtml_legend=1 00:06:41.970 --rc geninfo_all_blocks=1 00:06:41.970 --rc geninfo_unexecuted_blocks=1 00:06:41.970 00:06:41.971 ' 00:06:41.971 04:52:58 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.971 --rc genhtml_branch_coverage=1 00:06:41.971 --rc genhtml_function_coverage=1 00:06:41.971 --rc genhtml_legend=1 00:06:41.971 --rc geninfo_all_blocks=1 00:06:41.971 --rc geninfo_unexecuted_blocks=1 00:06:41.971 00:06:41.971 ' 00:06:41.971 04:52:58 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:41.971 04:52:58 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.971 04:52:58 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:41.971 04:52:58 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:41.971 04:52:58 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:41.971 04:52:58 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:41.971 04:52:58 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:41.971 04:52:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.971 04:52:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.971 04:52:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:41.971 ************************************ 
00:06:41.971 START TEST raid1_resize_data_offset_test 00:06:41.971 ************************************ 00:06:41.971 Process raid pid: 71694 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71694 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71694' 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71694 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 71694 ']' 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.971 04:52:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.971 [2024-11-21 04:52:58.656372] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:06:41.971 [2024-11-21 04:52:58.657173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.231 [2024-11-21 04:52:58.834944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.231 [2024-11-21 04:52:58.875578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.231 [2024-11-21 04:52:58.951601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.231 [2024-11-21 04:52:58.951771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.800 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.800 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:42.800 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:42.800 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.800 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.800 malloc0 00:06:42.800 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.800 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:42.800 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.800 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.060 malloc1 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.060 04:52:59 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.060 null0 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.060 [2024-11-21 04:52:59.584118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:43.060 [2024-11-21 04:52:59.586317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:43.060 [2024-11-21 04:52:59.586360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:43.060 [2024-11-21 04:52:59.586506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:43.060 [2024-11-21 04:52:59.586519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:43.060 [2024-11-21 04:52:59.586780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:43.060 [2024-11-21 04:52:59.586931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:43.060 [2024-11-21 04:52:59.586944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:43.060 [2024-11-21 04:52:59.587078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.060 [2024-11-21 04:52:59.648008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.060 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.319 malloc2 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.319 [2024-11-21 04:52:59.857757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:43.319 [2024-11-21 04:52:59.866790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.319 [2024-11-21 04:52:59.869126] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71694 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 71694 ']' 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 71694 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71694 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71694' 00:06:43.319 killing process with pid 71694 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 71694 00:06:43.319 [2024-11-21 04:52:59.962212] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.319 04:52:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 71694 00:06:43.319 [2024-11-21 04:52:59.963101] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:43.319 [2024-11-21 04:52:59.963182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.319 [2024-11-21 04:52:59.963205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:43.319 [2024-11-21 04:52:59.972335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.320 [2024-11-21 04:52:59.972672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.320 [2024-11-21 04:52:59.972688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:43.887 [2024-11-21 04:53:00.365852] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:44.147 04:53:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:44.147 00:06:44.147 real 0m2.113s 00:06:44.147 user 0m1.963s 00:06:44.147 sys 0m0.608s 00:06:44.147 
************************************ 00:06:44.147 END TEST raid1_resize_data_offset_test 00:06:44.147 ************************************ 00:06:44.147 04:53:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.147 04:53:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.147 04:53:00 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:44.147 04:53:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.147 04:53:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.147 04:53:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:44.147 ************************************ 00:06:44.147 START TEST raid0_resize_superblock_test 00:06:44.147 ************************************ 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:44.147 Process raid pid: 71750 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71750 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71750' 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71750 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71750 ']' 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.147 04:53:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.147 [2024-11-21 04:53:00.833125] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:44.147 [2024-11-21 04:53:00.833300] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.407 [2024-11-21 04:53:01.004742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.407 [2024-11-21 04:53:01.047421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.407 [2024-11-21 04:53:01.123550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.407 [2024-11-21 04:53:01.123698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.975 04:53:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.976 04:53:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:44.976 04:53:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:44.976 04:53:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.976 04:53:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:45.235 malloc0 00:06:45.235 04:53:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.235 04:53:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:45.235 04:53:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.235 04:53:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.235 [2024-11-21 04:53:01.879298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:45.235 [2024-11-21 04:53:01.879470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.235 [2024-11-21 04:53:01.879504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:45.235 [2024-11-21 04:53:01.879517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.235 [2024-11-21 04:53:01.882094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.235 [2024-11-21 04:53:01.882147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:45.235 pt0 00:06:45.235 04:53:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.235 04:53:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:45.235 04:53:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.235 04:53:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.495 0361df43-606e-4010-b43c-bda69116334b 00:06:45.495 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.495 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:45.495 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.495 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.495 86b61fcc-c3be-456d-9c6b-e77b73cb2d3d 00:06:45.495 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.495 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:45.495 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.496 ccabd023-732a-4c7a-b828-fc43e16703c4 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.496 [2024-11-21 04:53:02.089725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 86b61fcc-c3be-456d-9c6b-e77b73cb2d3d is claimed 00:06:45.496 [2024-11-21 04:53:02.089829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ccabd023-732a-4c7a-b828-fc43e16703c4 is claimed 00:06:45.496 [2024-11-21 04:53:02.089943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:45.496 [2024-11-21 04:53:02.089957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:45.496 [2024-11-21 04:53:02.090283] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:45.496 [2024-11-21 04:53:02.090468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:45.496 [2024-11-21 04:53:02.090481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:45.496 [2024-11-21 04:53:02.090618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:45.496 04:53:02 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.496 [2024-11-21 04:53:02.201699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.496 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.756 [2024-11-21 04:53:02.229601] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:45.756 [2024-11-21 04:53:02.229628] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '86b61fcc-c3be-456d-9c6b-e77b73cb2d3d' was resized: old size 131072, new size 204800 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.756 [2024-11-21 04:53:02.241492] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:45.756 [2024-11-21 04:53:02.241517] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ccabd023-732a-4c7a-b828-fc43e16703c4' was resized: old size 131072, new size 204800 00:06:45.756 [2024-11-21 04:53:02.241545] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.756 04:53:02 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.756 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.756 [2024-11-21 04:53:02.353428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.757 [2024-11-21 04:53:02.377193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:45.757 [2024-11-21 04:53:02.377329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:45.757 [2024-11-21 04:53:02.377361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:45.757 [2024-11-21 04:53:02.377413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:45.757 [2024-11-21 04:53:02.377587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.757 [2024-11-21 04:53:02.377659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.757 [2024-11-21 04:53:02.377719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.757 [2024-11-21 04:53:02.389137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:45.757 [2024-11-21 04:53:02.389249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.757 [2024-11-21 04:53:02.389293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:45.757 [2024-11-21 04:53:02.389329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.757 [2024-11-21 04:53:02.391809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.757 [2024-11-21 04:53:02.391900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:45.757 [2024-11-21 04:53:02.393579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 86b61fcc-c3be-456d-9c6b-e77b73cb2d3d 00:06:45.757 [2024-11-21 04:53:02.393687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 86b61fcc-c3be-456d-9c6b-e77b73cb2d3d is claimed 00:06:45.757 [2024-11-21 04:53:02.393837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ccabd023-732a-4c7a-b828-fc43e16703c4 00:06:45.757 [2024-11-21 04:53:02.393923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ccabd023-732a-4c7a-b828-fc43e16703c4 is claimed 00:06:45.757 [2024-11-21 04:53:02.394121] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ccabd023-732a-4c7a-b828-fc43e16703c4 (2) smaller than existing raid bdev Raid (3) 00:06:45.757 [2024-11-21 04:53:02.394194] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 86b61fcc-c3be-456d-9c6b-e77b73cb2d3d: File exists 00:06:45.757 [2024-11-21 04:53:02.394329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:45.757 [2024-11-21 04:53:02.394364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:45.757 [2024-11-21 04:53:02.394657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:45.757 pt0 00:06:45.757 [2024-11-21 04:53:02.394864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:45.757 [2024-11-21 04:53:02.394906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:45.757 [2024-11-21 04:53:02.395104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:45.757 [2024-11-21 04:53:02.413874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71750 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71750 ']' 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71750 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.757 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71750 00:06:46.017 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.017 killing process with pid 71750 00:06:46.017 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.017 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71750' 00:06:46.017 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 71750 00:06:46.017 [2024-11-21 04:53:02.501381] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:46.017 [2024-11-21 04:53:02.501472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.017 [2024-11-21 04:53:02.501520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:46.017 [2024-11-21 04:53:02.501531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:46.017 04:53:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 71750 00:06:46.277 [2024-11-21 04:53:02.808094] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.537 04:53:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:46.537 00:06:46.537 real 0m2.383s 00:06:46.537 user 0m2.448s 00:06:46.537 sys 0m0.674s 00:06:46.537 04:53:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.537 04:53:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.537 
************************************ 00:06:46.537 END TEST raid0_resize_superblock_test 00:06:46.537 ************************************ 00:06:46.537 04:53:03 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:46.537 04:53:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.537 04:53:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.537 04:53:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.537 ************************************ 00:06:46.537 START TEST raid1_resize_superblock_test 00:06:46.537 ************************************ 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71821 00:06:46.537 Process raid pid: 71821 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71821' 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71821 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71821 ']' 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.537 04:53:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.797 [2024-11-21 04:53:03.297189] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:46.797 [2024-11-21 04:53:03.297417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.797 [2024-11-21 04:53:03.463166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.797 [2024-11-21 04:53:03.503383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.057 [2024-11-21 04:53:03.580444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.057 [2024-11-21 04:53:03.580553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.627 malloc0 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.627 04:53:04 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.627 [2024-11-21 04:53:04.321805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:47.627 [2024-11-21 04:53:04.321894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.627 [2024-11-21 04:53:04.321927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:47.627 [2024-11-21 04:53:04.321948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.627 [2024-11-21 04:53:04.324742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.627 [2024-11-21 04:53:04.324794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:47.627 pt0 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.627 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.887 c8ae4925-fd44-4dd6-94b7-e746762d1ba8 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.887 04:53:04 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.887 99cd6565-2971-47fa-a760-bd8f2c724ddb 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.887 f4277dcb-3062-4800-b922-8d5cdb4142b6 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.887 [2024-11-21 04:53:04.530676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 99cd6565-2971-47fa-a760-bd8f2c724ddb is claimed 00:06:47.887 [2024-11-21 04:53:04.530883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f4277dcb-3062-4800-b922-8d5cdb4142b6 is claimed 00:06:47.887 [2024-11-21 04:53:04.531036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:47.887 [2024-11-21 04:53:04.531052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:47.887 [2024-11-21 04:53:04.531371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:47.887 [2024-11-21 04:53:04.531561] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:47.887 [2024-11-21 04:53:04.531576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:47.887 [2024-11-21 04:53:04.531710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.887 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.154 [2024-11-21 04:53:04.638754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.154 [2024-11-21 04:53:04.682494] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:48.154 [2024-11-21 04:53:04.682569] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '99cd6565-2971-47fa-a760-bd8f2c724ddb' was resized: old size 131072, new size 204800 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.154 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:48.155 04:53:04 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 [2024-11-21 04:53:04.694429] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:48.155 [2024-11-21 04:53:04.694491] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f4277dcb-3062-4800-b922-8d5cdb4142b6' was resized: old size 131072, new size 204800 00:06:48.155 [2024-11-21 04:53:04.694519] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 [2024-11-21 04:53:04.806368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 [2024-11-21 04:53:04.834145] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:48.155 [2024-11-21 04:53:04.834211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:48.155 [2024-11-21 04:53:04.834248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:48.155 [2024-11-21 04:53:04.834422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:48.155 [2024-11-21 04:53:04.834614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.155 [2024-11-21 04:53:04.834667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.155 [2024-11-21 04:53:04.834706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 [2024-11-21 04:53:04.846097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:48.155 [2024-11-21 04:53:04.846148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.155 [2024-11-21 04:53:04.846168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:48.155 [2024-11-21 04:53:04.846179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.155 [2024-11-21 04:53:04.848618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.155 [2024-11-21 04:53:04.848657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:48.155 [2024-11-21 04:53:04.850134] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
99cd6565-2971-47fa-a760-bd8f2c724ddb 00:06:48.155 [2024-11-21 04:53:04.850210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 99cd6565-2971-47fa-a760-bd8f2c724ddb is claimed 00:06:48.155 [2024-11-21 04:53:04.850283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f4277dcb-3062-4800-b922-8d5cdb4142b6 00:06:48.155 [2024-11-21 04:53:04.850305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f4277dcb-3062-4800-b922-8d5cdb4142b6 is claimed 00:06:48.155 [2024-11-21 04:53:04.850445] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f4277dcb-3062-4800-b922-8d5cdb4142b6 (2) smaller than existing raid bdev Raid (3) 00:06:48.155 [2024-11-21 04:53:04.850474] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 99cd6565-2971-47fa-a760-bd8f2c724ddb: File exists 00:06:48.155 [2024-11-21 04:53:04.850534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:48.155 [2024-11-21 04:53:04.850544] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:48.155 pt0 00:06:48.155 [2024-11-21 04:53:04.850811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:48.155 [2024-11-21 04:53:04.850955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:48.155 [2024-11-21 04:53:04.850964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:48.155 [2024-11-21 04:53:04.851072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.155 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 [2024-11-21 04:53:04.874393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71821 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71821 ']' 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71821 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71821 00:06:48.425 killing process with pid 71821 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71821' 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 71821 00:06:48.425 [2024-11-21 04:53:04.957173] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.425 [2024-11-21 04:53:04.957238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.425 [2024-11-21 04:53:04.957278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.425 [2024-11-21 04:53:04.957286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:48.425 04:53:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 71821 00:06:48.686 [2024-11-21 04:53:05.260056] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:48.946 ************************************ 00:06:48.946 END TEST raid1_resize_superblock_test 00:06:48.946 ************************************ 00:06:48.946 04:53:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:48.946 00:06:48.946 real 0m2.376s 00:06:48.946 user 0m2.468s 00:06:48.946 sys 0m0.672s 00:06:48.946 04:53:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.946 04:53:05 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:48.946 04:53:05 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:48.946 04:53:05 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:48.946 04:53:05 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:48.946 04:53:05 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:48.946 04:53:05 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:48.946 04:53:05 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:48.946 04:53:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.946 04:53:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.946 04:53:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:48.946 ************************************ 00:06:48.946 START TEST raid_function_test_raid0 00:06:48.946 ************************************ 00:06:48.946 04:53:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:48.946 04:53:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:48.946 04:53:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:48.946 04:53:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:49.206 04:53:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71903 00:06:49.206 04:53:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.206 Process raid pid: 71903 00:06:49.206 04:53:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71903' 00:06:49.206 04:53:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71903 00:06:49.206 04:53:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # 
'[' -z 71903 ']' 00:06:49.206 04:53:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.206 04:53:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.206 04:53:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.206 04:53:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.206 04:53:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:49.206 [2024-11-21 04:53:05.759629] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:49.206 [2024-11-21 04:53:05.759755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.206 [2024-11-21 04:53:05.932403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.466 [2024-11-21 04:53:05.972520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.466 [2024-11-21 04:53:06.048623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.466 [2024-11-21 04:53:06.048667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:50.035 04:53:06 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:50.035 Base_1 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:50.035 Base_2 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.035 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:50.035 [2024-11-21 04:53:06.649402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:50.035 [2024-11-21 04:53:06.651657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:50.036 [2024-11-21 04:53:06.651766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:50.036 [2024-11-21 04:53:06.651815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:50.036 [2024-11-21 04:53:06.652178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:50.036 [2024-11-21 04:53:06.652377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:50.036 [2024-11-21 04:53:06.652420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000006280 00:06:50.036 [2024-11-21 04:53:06.652635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:06:50.036 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:50.295 [2024-11-21 04:53:06.881109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:50.295 /dev/nbd0 00:06:50.295 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.296 1+0 records in 00:06:50.296 1+0 records out 00:06:50.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424711 s, 9.6 MB/s 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 
-- # size=4096 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.296 04:53:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.556 { 00:06:50.556 "nbd_device": "/dev/nbd0", 00:06:50.556 "bdev_name": "raid" 00:06:50.556 } 00:06:50.556 ]' 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.556 { 00:06:50.556 "nbd_device": "/dev/nbd0", 00:06:50.556 "bdev_name": "raid" 00:06:50.556 } 00:06:50.556 ]' 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:50.556 04:53:07 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:50.556 
04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:50.556 4096+0 records in 00:06:50.556 4096+0 records out 00:06:50.556 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0332353 s, 63.1 MB/s 00:06:50.556 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:50.816 4096+0 records in 00:06:50.816 4096+0 records out 00:06:50.816 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.209433 s, 10.0 MB/s 00:06:50.816 04:53:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:51.754 128+0 records in 00:06:51.754 128+0 records out 00:06:51.754 65536 bytes (66 kB, 64 KiB) copied, 0.00190433 s, 34.4 MB/s 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:51.754 2035+0 records in 00:06:51.754 2035+0 records out 00:06:51.754 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0140241 s, 74.3 MB/s 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:51.754 456+0 records in 00:06:51.754 456+0 records out 00:06:51.754 233472 bytes (233 kB, 228 KiB) copied, 0.00390008 s, 59.9 MB/s 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.754 04:53:08 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.754 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.013 [2024-11-21 04:53:08.689658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:52.013 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71903 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 71903 ']' 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 71903 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.273 04:53:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71903 00:06:52.532 04:53:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.532 04:53:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.532 04:53:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71903' 00:06:52.532 killing process with pid 71903 00:06:52.532 04:53:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 71903 00:06:52.532 [2024-11-21 04:53:09.033351] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.532 04:53:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 71903 00:06:52.532 [2024-11-21 04:53:09.033548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.532 [2024-11-21 04:53:09.033620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.532 [2024-11-21 04:53:09.033638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:06:52.532 [2024-11-21 04:53:09.077457] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:52.791 04:53:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:52.791 00:06:52.791 real 0m3.732s 00:06:52.791 user 0m3.981s 00:06:52.791 sys 0m1.325s 00:06:52.791 04:53:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.791 04:53:09 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:06:52.791 ************************************ 00:06:52.791 END TEST raid_function_test_raid0 00:06:52.791 ************************************ 00:06:52.791 04:53:09 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:52.791 04:53:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.791 04:53:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.791 04:53:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:52.791 ************************************ 00:06:52.791 START TEST raid_function_test_concat 00:06:52.791 ************************************ 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72031 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:52.791 Process raid pid: 72031 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72031' 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72031 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 72031 ']' 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.791 04:53:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:53.050 [2024-11-21 04:53:09.566600] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:53.051 [2024-11-21 04:53:09.566787] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.051 [2024-11-21 04:53:09.753084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.310 [2024-11-21 04:53:09.794169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.310 [2024-11-21 04:53:09.870596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.310 [2024-11-21 04:53:09.870654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:53.881 Base_1 
00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:53.881 Base_2 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:53.881 [2024-11-21 04:53:10.439227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:53.881 [2024-11-21 04:53:10.441394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:53.881 [2024-11-21 04:53:10.441459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:53.881 [2024-11-21 04:53:10.441477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:53.881 [2024-11-21 04:53:10.441753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:53.881 [2024-11-21 04:53:10.441934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:53.881 [2024-11-21 04:53:10.441952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:06:53.881 [2024-11-21 04:53:10.442142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.881 04:53:10 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:53.881 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:06:54.141 [2024-11-21 04:53:10.682913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:54.141 /dev/nbd0 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.141 1+0 records in 00:06:54.141 1+0 records out 00:06:54.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294318 s, 13.9 MB/s 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:54.141 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:54.401 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:54.401 { 00:06:54.401 "nbd_device": "/dev/nbd0", 00:06:54.401 "bdev_name": "raid" 00:06:54.401 } 00:06:54.401 ]' 00:06:54.401 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.401 { 00:06:54.401 "nbd_device": "/dev/nbd0", 00:06:54.401 "bdev_name": "raid" 00:06:54.401 } 00:06:54.401 ]' 00:06:54.401 04:53:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:54.401 04:53:11 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:54.401 4096+0 records in 00:06:54.401 4096+0 records out 00:06:54.401 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0321705 s, 65.2 MB/s 00:06:54.401 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:54.662 4096+0 records in 00:06:54.662 4096+0 records out 00:06:54.662 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.20854 s, 10.1 MB/s 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:54.662 128+0 records in 00:06:54.662 128+0 records out 00:06:54.662 65536 bytes (66 kB, 64 KiB) copied, 0.00106354 s, 61.6 MB/s 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:54.662 2035+0 records in 00:06:54.662 2035+0 records out 00:06:54.662 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0141057 s, 73.9 MB/s 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:54.662 456+0 records in 00:06:54.662 456+0 records out 00:06:54.662 233472 bytes (233 kB, 228 KiB) copied, 0.00321058 s, 72.7 MB/s 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.662 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:54.924 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.924 [2024-11-21 04:53:11.603847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.924 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.924 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.924 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.924 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.924 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.924 04:53:11 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:54.924 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.924 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:54.924 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:54.924 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72031 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 72031 ']' 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 72031 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.183 04:53:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72031 00:06:55.442 04:53:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.442 04:53:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.442 killing process with pid 72031 00:06:55.443 04:53:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72031' 00:06:55.443 04:53:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 72031 00:06:55.443 [2024-11-21 04:53:11.917969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.443 04:53:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 72031 00:06:55.443 [2024-11-21 04:53:11.918140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.443 [2024-11-21 04:53:11.918245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.443 [2024-11-21 04:53:11.918266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:06:55.443 [2024-11-21 04:53:11.958786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.702 04:53:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:55.702 00:06:55.702 real 0m2.805s 00:06:55.702 user 0m3.353s 00:06:55.702 sys 0m0.979s 00:06:55.702 04:53:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.702 04:53:12 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.702 ************************************ 00:06:55.702 END TEST raid_function_test_concat 00:06:55.702 ************************************ 00:06:55.702 04:53:12 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:55.702 04:53:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.702 04:53:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.702 04:53:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.702 ************************************ 00:06:55.702 START TEST raid0_resize_test 00:06:55.702 ************************************ 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72142 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72142' 00:06:55.702 Process raid pid: 72142 
00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72142 00:06:55.702 04:53:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 72142 ']' 00:06:55.703 04:53:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.703 04:53:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.703 04:53:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.703 04:53:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.703 04:53:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.703 [2024-11-21 04:53:12.431736] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:06:55.703 [2024-11-21 04:53:12.432238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.963 [2024-11-21 04:53:12.605029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.963 [2024-11-21 04:53:12.647119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.222 [2024-11-21 04:53:12.723899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.222 [2024-11-21 04:53:12.723944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.792 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.793 Base_1 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.793 Base_2 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.793 [2024-11-21 04:53:13.291169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:56.793 [2024-11-21 04:53:13.293473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:56.793 [2024-11-21 04:53:13.293546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:56.793 [2024-11-21 04:53:13.293558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:56.793 [2024-11-21 04:53:13.293961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:56.793 [2024-11-21 04:53:13.294138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:56.793 [2024-11-21 04:53:13.294155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:56.793 [2024-11-21 04:53:13.294356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.793 [2024-11-21 04:53:13.303082] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:56.793 [2024-11-21 04:53:13.303125] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:56.793 true 
00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.793 [2024-11-21 04:53:13.319236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.793 [2024-11-21 04:53:13.367050] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:56.793 [2024-11-21 04:53:13.367110] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:56.793 [2024-11-21 04:53:13.367141] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:56.793 true 
00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:56.793 [2024-11-21 04:53:13.379177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72142 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 72142 ']' 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 72142 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72142 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.793 04:53:13 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.793 killing process with pid 72142 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72142' 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 72142 00:06:56.793 [2024-11-21 04:53:13.469928] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:56.793 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 72142 00:06:56.793 [2024-11-21 04:53:13.470070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.793 [2024-11-21 04:53:13.470175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.793 [2024-11-21 04:53:13.470196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:56.793 [2024-11-21 04:53:13.472525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.363 04:53:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:57.363 00:06:57.363 real 0m1.449s 00:06:57.363 user 0m1.567s 00:06:57.363 sys 0m0.356s 00:06:57.363 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.363 04:53:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.363 ************************************ 00:06:57.363 END TEST raid0_resize_test 00:06:57.363 ************************************ 00:06:57.363 04:53:13 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:57.363 04:53:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.363 04:53:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.363 04:53:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.363 
************************************ 00:06:57.363 START TEST raid1_resize_test 00:06:57.363 ************************************ 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72193 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:57.363 Process raid pid: 72193 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72193' 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72193 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 72193 ']' 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.363 04:53:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.363 [2024-11-21 04:53:13.951776] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:57.363 [2024-11-21 04:53:13.951891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.621 [2024-11-21 04:53:14.124437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.621 [2024-11-21 04:53:14.169733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.621 [2024-11-21 04:53:14.246582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.621 [2024-11-21 04:53:14.246626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.191 Base_1 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:58.191 
04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.191 Base_2 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.191 [2024-11-21 04:53:14.809609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:58.191 [2024-11-21 04:53:14.811778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:58.191 [2024-11-21 04:53:14.811834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:58.191 [2024-11-21 04:53:14.811845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:58.191 [2024-11-21 04:53:14.812113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:58.191 [2024-11-21 04:53:14.812256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:58.191 [2024-11-21 04:53:14.812277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:58.191 [2024-11-21 04:53:14.812416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:58.191 04:53:14 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.191 [2024-11-21 04:53:14.821579] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:58.191 [2024-11-21 04:53:14.821606] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:58.191 true 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.191 [2024-11-21 04:53:14.837708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:58.191 [2024-11-21 04:53:14.885439] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:58.191 [2024-11-21 04:53:14.885462] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:58.191 [2024-11-21 04:53:14.885482] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:58.191 true 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:58.191 [2024-11-21 04:53:14.897610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.191 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72193 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 72193 ']' 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 72193 00:06:58.450 
04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72193 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.450 killing process with pid 72193 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72193' 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 72193 00:06:58.450 [2024-11-21 04:53:14.986224] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.450 [2024-11-21 04:53:14.986308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.450 04:53:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 72193 00:06:58.450 [2024-11-21 04:53:14.986783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.450 [2024-11-21 04:53:14.986807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:58.450 [2024-11-21 04:53:14.988599] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.710 04:53:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:58.710 00:06:58.710 real 0m1.443s 00:06:58.710 user 0m1.540s 00:06:58.710 sys 0m0.371s 00:06:58.710 04:53:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.710 04:53:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.710 ************************************ 00:06:58.710 END TEST raid1_resize_test 
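The raid1 block counts logged above follow from simple arithmetic: each null bdev is 32 MiB with a 512-byte block size (65536 blocks), and a mirror can only grow once every member has been resized, which is why the raid stays at 65536 blocks after Base_1 is resized and only reports 131072 once Base_2 is resized too. A minimal sketch of that bookkeeping (plain Python, independent of SPDK; the helper names are illustrative, not SPDK APIs):

```python
# Block-count arithmetic behind the raid1_resize_test output above.
# A raid1 (mirror) volume is as large as its smallest member, so the
# raid only grows after *every* base bdev has been resized.

MIB = 1024 * 1024

def blocks(size_mb: int, blksize: int = 512) -> int:
    """Convert a bdev size in MiB to a block count."""
    return size_mb * MIB // blksize

def raid1_blocks(member_sizes_mb, blksize: int = 512) -> int:
    """A mirror exposes the capacity of its smallest member."""
    return min(blocks(mb, blksize) for mb in member_sizes_mb)

# Both members start at 32 MiB -> 65536 blocks, as logged.
assert raid1_blocks([32, 32]) == 65536

# Resizing only Base_1 to 64 MiB leaves the raid at the old size.
assert raid1_blocks([64, 32]) == 65536

# Once Base_2 is resized as well, the raid reports 131072 blocks (64 MiB),
# matching "block count was changed from 65536 to 131072" in the log.
assert raid1_blocks([64, 64]) == 131072
print("raid1 resize arithmetic matches the logged block counts")
```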
00:06:58.710 ************************************ 00:06:58.710 04:53:15 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:58.710 04:53:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:58.710 04:53:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:58.710 04:53:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:58.710 04:53:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.710 04:53:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.710 ************************************ 00:06:58.710 START TEST raid_state_function_test 00:06:58.710 ************************************ 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.710 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:58.710 04:53:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72244 00:06:58.711 Process raid pid: 72244 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72244' 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72244 00:06:58.711 04:53:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72244 ']' 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.711 04:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.970 [2024-11-21 04:53:15.474597] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:58.970 [2024-11-21 04:53:15.474729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.970 [2024-11-21 04:53:15.642921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.970 [2024-11-21 04:53:15.683398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.230 [2024-11-21 04:53:15.759509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.230 [2024-11-21 04:53:15.759549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.799 [2024-11-21 04:53:16.298554] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:59.799 [2024-11-21 04:53:16.298608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:59.799 [2024-11-21 04:53:16.298618] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.799 [2024-11-21 04:53:16.298629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.799 
04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.799 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.800 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.800 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.800 "name": "Existed_Raid", 00:06:59.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.800 "strip_size_kb": 64, 00:06:59.800 "state": "configuring", 00:06:59.800 "raid_level": "raid0", 00:06:59.800 "superblock": false, 00:06:59.800 "num_base_bdevs": 2, 00:06:59.800 "num_base_bdevs_discovered": 0, 00:06:59.800 "num_base_bdevs_operational": 2, 00:06:59.800 "base_bdevs_list": [ 00:06:59.800 { 00:06:59.800 "name": "BaseBdev1", 00:06:59.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.800 "is_configured": false, 00:06:59.800 "data_offset": 0, 00:06:59.800 "data_size": 0 00:06:59.800 }, 00:06:59.800 { 00:06:59.800 "name": "BaseBdev2", 00:06:59.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.800 "is_configured": false, 00:06:59.800 "data_offset": 0, 00:06:59.800 "data_size": 0 00:06:59.800 } 00:06:59.800 ] 00:06:59.800 }' 00:06:59.800 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.800 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.059 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:00.059 04:53:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.059 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.059 [2024-11-21 04:53:16.777619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:00.059 [2024-11-21 04:53:16.777665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:00.059 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.059 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:00.059 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.059 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.059 [2024-11-21 04:53:16.785611] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:00.059 [2024-11-21 04:53:16.785647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:00.059 [2024-11-21 04:53:16.785655] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.059 [2024-11-21 04:53:16.785664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:00.059 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.059 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:00.059 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.059 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.319 [2024-11-21 04:53:16.808619] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:00.319 BaseBdev1 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.319 [ 00:07:00.319 { 00:07:00.319 "name": "BaseBdev1", 00:07:00.319 "aliases": [ 00:07:00.319 "428262ad-62f3-4c42-8d7a-0a05b4afaa9d" 00:07:00.319 ], 00:07:00.319 "product_name": "Malloc disk", 00:07:00.319 "block_size": 512, 00:07:00.319 "num_blocks": 65536, 00:07:00.319 "uuid": 
"428262ad-62f3-4c42-8d7a-0a05b4afaa9d", 00:07:00.319 "assigned_rate_limits": { 00:07:00.319 "rw_ios_per_sec": 0, 00:07:00.319 "rw_mbytes_per_sec": 0, 00:07:00.319 "r_mbytes_per_sec": 0, 00:07:00.319 "w_mbytes_per_sec": 0 00:07:00.319 }, 00:07:00.319 "claimed": true, 00:07:00.319 "claim_type": "exclusive_write", 00:07:00.319 "zoned": false, 00:07:00.319 "supported_io_types": { 00:07:00.319 "read": true, 00:07:00.319 "write": true, 00:07:00.319 "unmap": true, 00:07:00.319 "flush": true, 00:07:00.319 "reset": true, 00:07:00.319 "nvme_admin": false, 00:07:00.319 "nvme_io": false, 00:07:00.319 "nvme_io_md": false, 00:07:00.319 "write_zeroes": true, 00:07:00.319 "zcopy": true, 00:07:00.319 "get_zone_info": false, 00:07:00.319 "zone_management": false, 00:07:00.319 "zone_append": false, 00:07:00.319 "compare": false, 00:07:00.319 "compare_and_write": false, 00:07:00.319 "abort": true, 00:07:00.319 "seek_hole": false, 00:07:00.319 "seek_data": false, 00:07:00.319 "copy": true, 00:07:00.319 "nvme_iov_md": false 00:07:00.319 }, 00:07:00.319 "memory_domains": [ 00:07:00.319 { 00:07:00.319 "dma_device_id": "system", 00:07:00.319 "dma_device_type": 1 00:07:00.319 }, 00:07:00.319 { 00:07:00.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.319 "dma_device_type": 2 00:07:00.319 } 00:07:00.319 ], 00:07:00.319 "driver_specific": {} 00:07:00.319 } 00:07:00.319 ] 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:00.319 04:53:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.319 "name": "Existed_Raid", 00:07:00.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.319 "strip_size_kb": 64, 00:07:00.319 "state": "configuring", 00:07:00.319 "raid_level": "raid0", 00:07:00.319 "superblock": false, 00:07:00.319 "num_base_bdevs": 2, 00:07:00.319 "num_base_bdevs_discovered": 1, 00:07:00.319 "num_base_bdevs_operational": 2, 00:07:00.319 "base_bdevs_list": [ 00:07:00.319 { 00:07:00.319 "name": "BaseBdev1", 00:07:00.319 "uuid": "428262ad-62f3-4c42-8d7a-0a05b4afaa9d", 00:07:00.319 "is_configured": true, 00:07:00.319 "data_offset": 0, 
00:07:00.319 "data_size": 65536 00:07:00.319 }, 00:07:00.319 { 00:07:00.319 "name": "BaseBdev2", 00:07:00.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.319 "is_configured": false, 00:07:00.319 "data_offset": 0, 00:07:00.319 "data_size": 0 00:07:00.319 } 00:07:00.319 ] 00:07:00.319 }' 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.319 04:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.580 [2024-11-21 04:53:17.251836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:00.580 [2024-11-21 04:53:17.251881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.580 [2024-11-21 04:53:17.263839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:00.580 [2024-11-21 04:53:17.266006] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.580 [2024-11-21 04:53:17.266041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
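The verify_raid_bdev_state helper traced above boils down to selecting the Existed_Raid entry from `bdev_raid_get_bdevs all` with jq and comparing a handful of fields against the expected values (`configuring raid0 64 2`). A minimal re-statement of that check in Python, using the JSON fragment as it appears in the log (field names are as logged; the checker function itself is a hypothetical sketch, not SPDK code):

```python
import json

# Abridged bdev_raid_get_bdevs output for Existed_Raid, copied from the
# log above: one of two base bdevs (BaseBdev1) has been discovered.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Hypothetical sketch of the shell helper's field comparisons."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# Mirrors the call: verify_raid_bdev_state Existed_Raid configuring raid0 64 2
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 2)
print("Existed_Raid state verified: 1 of 2 base bdevs discovered")
```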
00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.580 "name": "Existed_Raid", 00:07:00.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.580 "strip_size_kb": 64, 00:07:00.580 "state": "configuring", 00:07:00.580 "raid_level": "raid0", 00:07:00.580 "superblock": false, 00:07:00.580 "num_base_bdevs": 2, 00:07:00.580 "num_base_bdevs_discovered": 1, 00:07:00.580 "num_base_bdevs_operational": 2, 00:07:00.580 "base_bdevs_list": [ 00:07:00.580 { 00:07:00.580 "name": "BaseBdev1", 00:07:00.580 "uuid": "428262ad-62f3-4c42-8d7a-0a05b4afaa9d", 00:07:00.580 "is_configured": true, 00:07:00.580 "data_offset": 0, 00:07:00.580 "data_size": 65536 00:07:00.580 }, 00:07:00.580 { 00:07:00.580 "name": "BaseBdev2", 00:07:00.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.580 "is_configured": false, 00:07:00.580 "data_offset": 0, 00:07:00.580 "data_size": 0 00:07:00.580 } 00:07:00.580 ] 00:07:00.580 }' 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.580 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.148 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:01.148 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.148 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.149 [2024-11-21 04:53:17.735782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:01.149 [2024-11-21 04:53:17.735829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:01.149 [2024-11-21 04:53:17.735846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:01.149 [2024-11-21 04:53:17.736186] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:01.149 [2024-11-21 04:53:17.736387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:01.149 [2024-11-21 04:53:17.736411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:01.149 [2024-11-21 04:53:17.736626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.149 BaseBdev2 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.149 04:53:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.149 [ 00:07:01.149 { 00:07:01.149 "name": "BaseBdev2", 00:07:01.149 "aliases": [ 00:07:01.149 "d7c54e76-1245-4050-8ae9-300944ae103c" 00:07:01.149 ], 00:07:01.149 "product_name": "Malloc disk", 00:07:01.149 "block_size": 512, 00:07:01.149 "num_blocks": 65536, 00:07:01.149 "uuid": "d7c54e76-1245-4050-8ae9-300944ae103c", 00:07:01.149 "assigned_rate_limits": { 00:07:01.149 "rw_ios_per_sec": 0, 00:07:01.149 "rw_mbytes_per_sec": 0, 00:07:01.149 "r_mbytes_per_sec": 0, 00:07:01.149 "w_mbytes_per_sec": 0 00:07:01.149 }, 00:07:01.149 "claimed": true, 00:07:01.149 "claim_type": "exclusive_write", 00:07:01.149 "zoned": false, 00:07:01.149 "supported_io_types": { 00:07:01.149 "read": true, 00:07:01.149 "write": true, 00:07:01.149 "unmap": true, 00:07:01.149 "flush": true, 00:07:01.149 "reset": true, 00:07:01.149 "nvme_admin": false, 00:07:01.149 "nvme_io": false, 00:07:01.149 "nvme_io_md": false, 00:07:01.149 "write_zeroes": true, 00:07:01.149 "zcopy": true, 00:07:01.149 "get_zone_info": false, 00:07:01.149 "zone_management": false, 00:07:01.149 "zone_append": false, 00:07:01.149 "compare": false, 00:07:01.149 "compare_and_write": false, 00:07:01.149 "abort": true, 00:07:01.149 "seek_hole": false, 00:07:01.149 "seek_data": false, 00:07:01.149 "copy": true, 00:07:01.149 "nvme_iov_md": false 00:07:01.149 }, 00:07:01.149 "memory_domains": [ 00:07:01.149 { 00:07:01.149 "dma_device_id": "system", 00:07:01.149 "dma_device_type": 1 00:07:01.149 }, 00:07:01.149 { 00:07:01.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.149 "dma_device_type": 2 00:07:01.149 } 00:07:01.149 ], 00:07:01.149 "driver_specific": {} 00:07:01.149 } 00:07:01.149 ] 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:01.149 04:53:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:01.149 "name": "Existed_Raid", 00:07:01.149 "uuid": "fc4161db-dfd7-4999-8ad7-b1eecdad9405", 00:07:01.149 "strip_size_kb": 64, 00:07:01.149 "state": "online", 00:07:01.149 "raid_level": "raid0", 00:07:01.149 "superblock": false, 00:07:01.149 "num_base_bdevs": 2, 00:07:01.149 "num_base_bdevs_discovered": 2, 00:07:01.149 "num_base_bdevs_operational": 2, 00:07:01.149 "base_bdevs_list": [ 00:07:01.149 { 00:07:01.149 "name": "BaseBdev1", 00:07:01.149 "uuid": "428262ad-62f3-4c42-8d7a-0a05b4afaa9d", 00:07:01.149 "is_configured": true, 00:07:01.149 "data_offset": 0, 00:07:01.149 "data_size": 65536 00:07:01.149 }, 00:07:01.149 { 00:07:01.149 "name": "BaseBdev2", 00:07:01.149 "uuid": "d7c54e76-1245-4050-8ae9-300944ae103c", 00:07:01.149 "is_configured": true, 00:07:01.149 "data_offset": 0, 00:07:01.149 "data_size": 65536 00:07:01.149 } 00:07:01.149 ] 00:07:01.149 }' 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.149 04:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.717 [2024-11-21 04:53:18.179325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.717 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:01.717 "name": "Existed_Raid", 00:07:01.717 "aliases": [ 00:07:01.717 "fc4161db-dfd7-4999-8ad7-b1eecdad9405" 00:07:01.717 ], 00:07:01.717 "product_name": "Raid Volume", 00:07:01.717 "block_size": 512, 00:07:01.717 "num_blocks": 131072, 00:07:01.717 "uuid": "fc4161db-dfd7-4999-8ad7-b1eecdad9405", 00:07:01.718 "assigned_rate_limits": { 00:07:01.718 "rw_ios_per_sec": 0, 00:07:01.718 "rw_mbytes_per_sec": 0, 00:07:01.718 "r_mbytes_per_sec": 0, 00:07:01.718 "w_mbytes_per_sec": 0 00:07:01.718 }, 00:07:01.718 "claimed": false, 00:07:01.718 "zoned": false, 00:07:01.718 "supported_io_types": { 00:07:01.718 "read": true, 00:07:01.718 "write": true, 00:07:01.718 "unmap": true, 00:07:01.718 "flush": true, 00:07:01.718 "reset": true, 00:07:01.718 "nvme_admin": false, 00:07:01.718 "nvme_io": false, 00:07:01.718 "nvme_io_md": false, 00:07:01.718 "write_zeroes": true, 00:07:01.718 "zcopy": false, 00:07:01.718 "get_zone_info": false, 00:07:01.718 "zone_management": false, 00:07:01.718 "zone_append": false, 00:07:01.718 "compare": false, 00:07:01.718 "compare_and_write": false, 00:07:01.718 "abort": false, 00:07:01.718 "seek_hole": false, 00:07:01.718 "seek_data": false, 00:07:01.718 "copy": false, 00:07:01.718 "nvme_iov_md": false 00:07:01.718 }, 00:07:01.718 "memory_domains": [ 00:07:01.718 { 00:07:01.718 "dma_device_id": "system", 00:07:01.718 "dma_device_type": 1 00:07:01.718 }, 00:07:01.718 { 00:07:01.718 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:01.718 "dma_device_type": 2 00:07:01.718 }, 00:07:01.718 { 00:07:01.718 "dma_device_id": "system", 00:07:01.718 "dma_device_type": 1 00:07:01.718 }, 00:07:01.718 { 00:07:01.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.718 "dma_device_type": 2 00:07:01.718 } 00:07:01.718 ], 00:07:01.718 "driver_specific": { 00:07:01.718 "raid": { 00:07:01.718 "uuid": "fc4161db-dfd7-4999-8ad7-b1eecdad9405", 00:07:01.718 "strip_size_kb": 64, 00:07:01.718 "state": "online", 00:07:01.718 "raid_level": "raid0", 00:07:01.718 "superblock": false, 00:07:01.718 "num_base_bdevs": 2, 00:07:01.718 "num_base_bdevs_discovered": 2, 00:07:01.718 "num_base_bdevs_operational": 2, 00:07:01.718 "base_bdevs_list": [ 00:07:01.718 { 00:07:01.718 "name": "BaseBdev1", 00:07:01.718 "uuid": "428262ad-62f3-4c42-8d7a-0a05b4afaa9d", 00:07:01.718 "is_configured": true, 00:07:01.718 "data_offset": 0, 00:07:01.718 "data_size": 65536 00:07:01.718 }, 00:07:01.718 { 00:07:01.718 "name": "BaseBdev2", 00:07:01.718 "uuid": "d7c54e76-1245-4050-8ae9-300944ae103c", 00:07:01.718 "is_configured": true, 00:07:01.718 "data_offset": 0, 00:07:01.718 "data_size": 65536 00:07:01.718 } 00:07:01.718 ] 00:07:01.718 } 00:07:01.718 } 00:07:01.718 }' 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:01.718 BaseBdev2' 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:01.718 [2024-11-21 04:53:18.390742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:01.718 [2024-11-21 04:53:18.390771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.718 [2024-11-21 04:53:18.390823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.718 04:53:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.718 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.978 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.978 "name": "Existed_Raid", 00:07:01.978 "uuid": "fc4161db-dfd7-4999-8ad7-b1eecdad9405", 00:07:01.978 "strip_size_kb": 64, 00:07:01.978 "state": "offline", 00:07:01.978 "raid_level": "raid0", 00:07:01.978 "superblock": false, 00:07:01.978 "num_base_bdevs": 2, 00:07:01.978 "num_base_bdevs_discovered": 1, 00:07:01.978 "num_base_bdevs_operational": 1, 00:07:01.978 "base_bdevs_list": [ 00:07:01.978 { 00:07:01.978 "name": null, 00:07:01.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.978 "is_configured": false, 00:07:01.978 "data_offset": 0, 00:07:01.978 "data_size": 65536 00:07:01.978 }, 00:07:01.978 { 00:07:01.978 "name": "BaseBdev2", 00:07:01.978 "uuid": "d7c54e76-1245-4050-8ae9-300944ae103c", 00:07:01.978 "is_configured": true, 00:07:01.978 "data_offset": 0, 00:07:01.978 "data_size": 65536 00:07:01.978 } 00:07:01.978 ] 00:07:01.978 }' 00:07:01.978 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.978 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.237 [2024-11-21 04:53:18.870081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:02.237 [2024-11-21 04:53:18.870186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.237 04:53:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72244 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72244 ']' 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72244 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72244 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.237 killing process with pid 72244 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72244' 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72244 00:07:02.237 [2024-11-21 04:53:18.968366] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:02.237 04:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72244 00:07:02.237 [2024-11-21 04:53:18.970048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:02.804 00:07:02.804 real 0m3.902s 00:07:02.804 user 0m6.015s 00:07:02.804 sys 0m0.825s 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.804 ************************************ 00:07:02.804 END TEST raid_state_function_test 00:07:02.804 ************************************ 00:07:02.804 04:53:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:02.804 04:53:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:02.804 04:53:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.804 04:53:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.804 ************************************ 00:07:02.804 START TEST raid_state_function_test_sb 00:07:02.804 ************************************ 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72481 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.804 Process raid pid: 72481 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72481' 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72481 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72481 ']' 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.804 04:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.804 [2024-11-21 04:53:19.449540] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:07:02.804 [2024-11-21 04:53:19.449682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.063 [2024-11-21 04:53:19.627419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.063 [2024-11-21 04:53:19.669516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.063 [2024-11-21 04:53:19.745724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.063 [2024-11-21 04:53:19.745773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.635 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.636 [2024-11-21 04:53:20.273254] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:03.636 [2024-11-21 04:53:20.273340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:03.636 [2024-11-21 04:53:20.273351] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.636 [2024-11-21 04:53:20.273362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.636 
04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.636 "name": "Existed_Raid", 00:07:03.636 "uuid": "7febd512-0f53-430b-a5b3-fc299b5c5f26", 00:07:03.636 "strip_size_kb": 
64, 00:07:03.636 "state": "configuring", 00:07:03.636 "raid_level": "raid0", 00:07:03.636 "superblock": true, 00:07:03.636 "num_base_bdevs": 2, 00:07:03.636 "num_base_bdevs_discovered": 0, 00:07:03.636 "num_base_bdevs_operational": 2, 00:07:03.636 "base_bdevs_list": [ 00:07:03.636 { 00:07:03.636 "name": "BaseBdev1", 00:07:03.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.636 "is_configured": false, 00:07:03.636 "data_offset": 0, 00:07:03.636 "data_size": 0 00:07:03.636 }, 00:07:03.636 { 00:07:03.636 "name": "BaseBdev2", 00:07:03.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.636 "is_configured": false, 00:07:03.636 "data_offset": 0, 00:07:03.636 "data_size": 0 00:07:03.636 } 00:07:03.636 ] 00:07:03.636 }' 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.636 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.210 [2024-11-21 04:53:20.688464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:04.210 [2024-11-21 04:53:20.688541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.210 04:53:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.210 [2024-11-21 04:53:20.700414] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:04.210 [2024-11-21 04:53:20.700454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:04.210 [2024-11-21 04:53:20.700462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.210 [2024-11-21 04:53:20.700472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.210 [2024-11-21 04:53:20.727391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:04.210 BaseBdev1 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.210 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.210 [ 00:07:04.210 { 00:07:04.210 "name": "BaseBdev1", 00:07:04.210 "aliases": [ 00:07:04.210 "86f65ffd-7679-447a-9dcb-cc8dcf2778c7" 00:07:04.210 ], 00:07:04.210 "product_name": "Malloc disk", 00:07:04.210 "block_size": 512, 00:07:04.210 "num_blocks": 65536, 00:07:04.210 "uuid": "86f65ffd-7679-447a-9dcb-cc8dcf2778c7", 00:07:04.210 "assigned_rate_limits": { 00:07:04.210 "rw_ios_per_sec": 0, 00:07:04.210 "rw_mbytes_per_sec": 0, 00:07:04.210 "r_mbytes_per_sec": 0, 00:07:04.210 "w_mbytes_per_sec": 0 00:07:04.210 }, 00:07:04.210 "claimed": true, 00:07:04.210 "claim_type": "exclusive_write", 00:07:04.210 "zoned": false, 00:07:04.210 "supported_io_types": { 00:07:04.210 "read": true, 00:07:04.210 "write": true, 00:07:04.210 "unmap": true, 00:07:04.210 "flush": true, 00:07:04.210 "reset": true, 00:07:04.210 "nvme_admin": false, 00:07:04.210 "nvme_io": false, 00:07:04.210 "nvme_io_md": false, 00:07:04.210 "write_zeroes": true, 00:07:04.210 "zcopy": true, 00:07:04.210 "get_zone_info": false, 00:07:04.210 "zone_management": false, 00:07:04.210 "zone_append": false, 00:07:04.210 "compare": false, 00:07:04.210 "compare_and_write": false, 00:07:04.210 
"abort": true, 00:07:04.211 "seek_hole": false, 00:07:04.211 "seek_data": false, 00:07:04.211 "copy": true, 00:07:04.211 "nvme_iov_md": false 00:07:04.211 }, 00:07:04.211 "memory_domains": [ 00:07:04.211 { 00:07:04.211 "dma_device_id": "system", 00:07:04.211 "dma_device_type": 1 00:07:04.211 }, 00:07:04.211 { 00:07:04.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.211 "dma_device_type": 2 00:07:04.211 } 00:07:04.211 ], 00:07:04.211 "driver_specific": {} 00:07:04.211 } 00:07:04.211 ] 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.211 "name": "Existed_Raid", 00:07:04.211 "uuid": "d28ff9c7-ebd8-4aa9-8958-bd5b7a8545e0", 00:07:04.211 "strip_size_kb": 64, 00:07:04.211 "state": "configuring", 00:07:04.211 "raid_level": "raid0", 00:07:04.211 "superblock": true, 00:07:04.211 "num_base_bdevs": 2, 00:07:04.211 "num_base_bdevs_discovered": 1, 00:07:04.211 "num_base_bdevs_operational": 2, 00:07:04.211 "base_bdevs_list": [ 00:07:04.211 { 00:07:04.211 "name": "BaseBdev1", 00:07:04.211 "uuid": "86f65ffd-7679-447a-9dcb-cc8dcf2778c7", 00:07:04.211 "is_configured": true, 00:07:04.211 "data_offset": 2048, 00:07:04.211 "data_size": 63488 00:07:04.211 }, 00:07:04.211 { 00:07:04.211 "name": "BaseBdev2", 00:07:04.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.211 "is_configured": false, 00:07:04.211 "data_offset": 0, 00:07:04.211 "data_size": 0 00:07:04.211 } 00:07:04.211 ] 00:07:04.211 }' 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.211 04:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.480 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:04.480 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.480 04:53:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.740 [2024-11-21 04:53:21.214657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:04.740 [2024-11-21 04:53:21.214740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.741 [2024-11-21 04:53:21.226713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:04.741 [2024-11-21 04:53:21.228980] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.741 [2024-11-21 04:53:21.229028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.741 "name": "Existed_Raid", 00:07:04.741 "uuid": "09407ca8-be8d-4775-9617-bd0fb375a190", 00:07:04.741 "strip_size_kb": 64, 00:07:04.741 "state": "configuring", 00:07:04.741 "raid_level": "raid0", 00:07:04.741 "superblock": true, 00:07:04.741 "num_base_bdevs": 2, 00:07:04.741 "num_base_bdevs_discovered": 1, 00:07:04.741 "num_base_bdevs_operational": 2, 00:07:04.741 "base_bdevs_list": [ 00:07:04.741 { 00:07:04.741 "name": "BaseBdev1", 00:07:04.741 "uuid": "86f65ffd-7679-447a-9dcb-cc8dcf2778c7", 00:07:04.741 "is_configured": true, 00:07:04.741 "data_offset": 2048, 
00:07:04.741 "data_size": 63488 00:07:04.741 }, 00:07:04.741 { 00:07:04.741 "name": "BaseBdev2", 00:07:04.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.741 "is_configured": false, 00:07:04.741 "data_offset": 0, 00:07:04.741 "data_size": 0 00:07:04.741 } 00:07:04.741 ] 00:07:04.741 }' 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.741 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.002 [2024-11-21 04:53:21.654804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:05.002 [2024-11-21 04:53:21.655032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:05.002 [2024-11-21 04:53:21.655053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:05.002 BaseBdev2 00:07:05.002 [2024-11-21 04:53:21.655412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:05.002 [2024-11-21 04:53:21.655600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:05.002 [2024-11-21 04:53:21.655624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:05.002 [2024-11-21 04:53:21.655771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.002 [ 00:07:05.002 { 00:07:05.002 "name": "BaseBdev2", 00:07:05.002 "aliases": [ 00:07:05.002 "8a16a56b-e0ef-4db9-896a-dbea678f6d16" 00:07:05.002 ], 00:07:05.002 "product_name": "Malloc disk", 00:07:05.002 "block_size": 512, 00:07:05.002 "num_blocks": 65536, 00:07:05.002 "uuid": "8a16a56b-e0ef-4db9-896a-dbea678f6d16", 00:07:05.002 "assigned_rate_limits": { 00:07:05.002 "rw_ios_per_sec": 0, 00:07:05.002 "rw_mbytes_per_sec": 0, 00:07:05.002 "r_mbytes_per_sec": 0, 00:07:05.002 "w_mbytes_per_sec": 0 00:07:05.002 }, 00:07:05.002 "claimed": true, 00:07:05.002 "claim_type": 
"exclusive_write", 00:07:05.002 "zoned": false, 00:07:05.002 "supported_io_types": { 00:07:05.002 "read": true, 00:07:05.002 "write": true, 00:07:05.002 "unmap": true, 00:07:05.002 "flush": true, 00:07:05.002 "reset": true, 00:07:05.002 "nvme_admin": false, 00:07:05.002 "nvme_io": false, 00:07:05.002 "nvme_io_md": false, 00:07:05.002 "write_zeroes": true, 00:07:05.002 "zcopy": true, 00:07:05.002 "get_zone_info": false, 00:07:05.002 "zone_management": false, 00:07:05.002 "zone_append": false, 00:07:05.002 "compare": false, 00:07:05.002 "compare_and_write": false, 00:07:05.002 "abort": true, 00:07:05.002 "seek_hole": false, 00:07:05.002 "seek_data": false, 00:07:05.002 "copy": true, 00:07:05.002 "nvme_iov_md": false 00:07:05.002 }, 00:07:05.002 "memory_domains": [ 00:07:05.002 { 00:07:05.002 "dma_device_id": "system", 00:07:05.002 "dma_device_type": 1 00:07:05.002 }, 00:07:05.002 { 00:07:05.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.002 "dma_device_type": 2 00:07:05.002 } 00:07:05.002 ], 00:07:05.002 "driver_specific": {} 00:07:05.002 } 00:07:05.002 ] 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.002 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.262 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.262 "name": "Existed_Raid", 00:07:05.262 "uuid": "09407ca8-be8d-4775-9617-bd0fb375a190", 00:07:05.262 "strip_size_kb": 64, 00:07:05.262 "state": "online", 00:07:05.262 "raid_level": "raid0", 00:07:05.262 "superblock": true, 00:07:05.262 "num_base_bdevs": 2, 00:07:05.262 "num_base_bdevs_discovered": 2, 00:07:05.262 "num_base_bdevs_operational": 2, 00:07:05.262 "base_bdevs_list": [ 00:07:05.262 { 00:07:05.262 "name": "BaseBdev1", 00:07:05.262 "uuid": "86f65ffd-7679-447a-9dcb-cc8dcf2778c7", 00:07:05.262 "is_configured": true, 00:07:05.262 "data_offset": 2048, 00:07:05.262 "data_size": 63488 
00:07:05.262 }, 00:07:05.262 { 00:07:05.262 "name": "BaseBdev2", 00:07:05.262 "uuid": "8a16a56b-e0ef-4db9-896a-dbea678f6d16", 00:07:05.262 "is_configured": true, 00:07:05.262 "data_offset": 2048, 00:07:05.262 "data_size": 63488 00:07:05.262 } 00:07:05.262 ] 00:07:05.262 }' 00:07:05.262 04:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.262 04:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:05.522 [2024-11-21 04:53:22.082422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:05.522 "name": 
"Existed_Raid", 00:07:05.522 "aliases": [ 00:07:05.522 "09407ca8-be8d-4775-9617-bd0fb375a190" 00:07:05.522 ], 00:07:05.522 "product_name": "Raid Volume", 00:07:05.522 "block_size": 512, 00:07:05.522 "num_blocks": 126976, 00:07:05.522 "uuid": "09407ca8-be8d-4775-9617-bd0fb375a190", 00:07:05.522 "assigned_rate_limits": { 00:07:05.522 "rw_ios_per_sec": 0, 00:07:05.522 "rw_mbytes_per_sec": 0, 00:07:05.522 "r_mbytes_per_sec": 0, 00:07:05.522 "w_mbytes_per_sec": 0 00:07:05.522 }, 00:07:05.522 "claimed": false, 00:07:05.522 "zoned": false, 00:07:05.522 "supported_io_types": { 00:07:05.522 "read": true, 00:07:05.522 "write": true, 00:07:05.522 "unmap": true, 00:07:05.522 "flush": true, 00:07:05.522 "reset": true, 00:07:05.522 "nvme_admin": false, 00:07:05.522 "nvme_io": false, 00:07:05.522 "nvme_io_md": false, 00:07:05.522 "write_zeroes": true, 00:07:05.522 "zcopy": false, 00:07:05.522 "get_zone_info": false, 00:07:05.522 "zone_management": false, 00:07:05.522 "zone_append": false, 00:07:05.522 "compare": false, 00:07:05.522 "compare_and_write": false, 00:07:05.522 "abort": false, 00:07:05.522 "seek_hole": false, 00:07:05.522 "seek_data": false, 00:07:05.522 "copy": false, 00:07:05.522 "nvme_iov_md": false 00:07:05.522 }, 00:07:05.522 "memory_domains": [ 00:07:05.522 { 00:07:05.522 "dma_device_id": "system", 00:07:05.522 "dma_device_type": 1 00:07:05.522 }, 00:07:05.522 { 00:07:05.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.522 "dma_device_type": 2 00:07:05.522 }, 00:07:05.522 { 00:07:05.522 "dma_device_id": "system", 00:07:05.522 "dma_device_type": 1 00:07:05.522 }, 00:07:05.522 { 00:07:05.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.522 "dma_device_type": 2 00:07:05.522 } 00:07:05.522 ], 00:07:05.522 "driver_specific": { 00:07:05.522 "raid": { 00:07:05.522 "uuid": "09407ca8-be8d-4775-9617-bd0fb375a190", 00:07:05.522 "strip_size_kb": 64, 00:07:05.522 "state": "online", 00:07:05.522 "raid_level": "raid0", 00:07:05.522 "superblock": true, 00:07:05.522 
"num_base_bdevs": 2, 00:07:05.522 "num_base_bdevs_discovered": 2, 00:07:05.522 "num_base_bdevs_operational": 2, 00:07:05.522 "base_bdevs_list": [ 00:07:05.522 { 00:07:05.522 "name": "BaseBdev1", 00:07:05.522 "uuid": "86f65ffd-7679-447a-9dcb-cc8dcf2778c7", 00:07:05.522 "is_configured": true, 00:07:05.522 "data_offset": 2048, 00:07:05.522 "data_size": 63488 00:07:05.522 }, 00:07:05.522 { 00:07:05.522 "name": "BaseBdev2", 00:07:05.522 "uuid": "8a16a56b-e0ef-4db9-896a-dbea678f6d16", 00:07:05.522 "is_configured": true, 00:07:05.522 "data_offset": 2048, 00:07:05.522 "data_size": 63488 00:07:05.522 } 00:07:05.522 ] 00:07:05.522 } 00:07:05.522 } 00:07:05.522 }' 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:05.522 BaseBdev2' 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:05.522 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:05.523 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:05.523 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.523 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.523 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.523 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.783 [2024-11-21 04:53:22.297795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:05.783 [2024-11-21 04:53:22.297823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:05.783 [2024-11-21 04:53:22.297866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.783 04:53:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.783 "name": "Existed_Raid", 00:07:05.783 "uuid": "09407ca8-be8d-4775-9617-bd0fb375a190", 00:07:05.783 "strip_size_kb": 64, 00:07:05.783 "state": "offline", 00:07:05.783 "raid_level": "raid0", 00:07:05.783 "superblock": true, 00:07:05.783 "num_base_bdevs": 2, 00:07:05.783 "num_base_bdevs_discovered": 1, 00:07:05.783 "num_base_bdevs_operational": 1, 00:07:05.783 "base_bdevs_list": [ 00:07:05.783 { 00:07:05.783 "name": null, 00:07:05.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.783 "is_configured": false, 00:07:05.783 "data_offset": 0, 00:07:05.783 "data_size": 63488 00:07:05.783 }, 00:07:05.783 { 00:07:05.783 "name": "BaseBdev2", 00:07:05.783 "uuid": "8a16a56b-e0ef-4db9-896a-dbea678f6d16", 00:07:05.783 "is_configured": true, 00:07:05.783 "data_offset": 2048, 00:07:05.783 "data_size": 63488 00:07:05.783 } 00:07:05.783 ] 00:07:05.783 }' 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.783 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.043 04:53:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.043 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.043 [2024-11-21 04:53:22.760668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:06.043 [2024-11-21 04:53:22.760724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.303 04:53:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72481 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72481 ']' 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72481 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72481 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.303 killing process with pid 72481 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72481' 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72481 00:07:06.303 [2024-11-21 04:53:22.877485] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.303 04:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72481 00:07:06.303 [2024-11-21 04:53:22.879061] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.563 04:53:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:06.563 00:07:06.563 real 0m3.850s 00:07:06.563 user 0m5.887s 00:07:06.563 sys 0m0.828s 00:07:06.563 04:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.563 04:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.563 ************************************ 00:07:06.563 END TEST raid_state_function_test_sb 00:07:06.563 ************************************ 00:07:06.563 04:53:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:06.563 04:53:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:06.563 04:53:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.563 04:53:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.563 ************************************ 00:07:06.563 START TEST raid_superblock_test 00:07:06.563 ************************************ 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:06.563 04:53:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:06.563 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:06.564 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:06.564 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72722 00:07:06.564 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72722 00:07:06.564 04:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:06.564 04:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72722 ']' 00:07:06.564 04:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.564 04:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.564 04:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.564 04:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.564 04:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.823 [2024-11-21 04:53:23.364784] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:06.823 [2024-11-21 04:53:23.364918] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72722 ] 00:07:06.823 [2024-11-21 04:53:23.528442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.082 [2024-11-21 04:53:23.569012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.082 [2024-11-21 04:53:23.644740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.082 [2024-11-21 04:53:23.644788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:07.651 04:53:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.651 malloc1 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.651 [2024-11-21 04:53:24.195135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:07.651 [2024-11-21 04:53:24.195205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.651 [2024-11-21 04:53:24.195227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:07.651 [2024-11-21 04:53:24.195250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.651 [2024-11-21 04:53:24.197660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.651 [2024-11-21 04:53:24.197695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:07.651 pt1 00:07:07.651 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:07.652 04:53:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.652 malloc2 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.652 [2024-11-21 04:53:24.229567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:07.652 [2024-11-21 04:53:24.229617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.652 [2024-11-21 04:53:24.229632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:07.652 
[2024-11-21 04:53:24.229643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.652 [2024-11-21 04:53:24.232004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.652 [2024-11-21 04:53:24.232037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:07.652 pt2 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.652 [2024-11-21 04:53:24.241597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:07.652 [2024-11-21 04:53:24.243698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:07.652 [2024-11-21 04:53:24.243838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:07.652 [2024-11-21 04:53:24.243853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:07.652 [2024-11-21 04:53:24.244131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:07.652 [2024-11-21 04:53:24.244281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:07.652 [2024-11-21 04:53:24.244314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:07.652 [2024-11-21 04:53:24.244448] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.652 "name": "raid_bdev1", 00:07:07.652 "uuid": 
"618a38e4-ad12-4782-b33b-d7bafd7f96b0", 00:07:07.652 "strip_size_kb": 64, 00:07:07.652 "state": "online", 00:07:07.652 "raid_level": "raid0", 00:07:07.652 "superblock": true, 00:07:07.652 "num_base_bdevs": 2, 00:07:07.652 "num_base_bdevs_discovered": 2, 00:07:07.652 "num_base_bdevs_operational": 2, 00:07:07.652 "base_bdevs_list": [ 00:07:07.652 { 00:07:07.652 "name": "pt1", 00:07:07.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:07.652 "is_configured": true, 00:07:07.652 "data_offset": 2048, 00:07:07.652 "data_size": 63488 00:07:07.652 }, 00:07:07.652 { 00:07:07.652 "name": "pt2", 00:07:07.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:07.652 "is_configured": true, 00:07:07.652 "data_offset": 2048, 00:07:07.652 "data_size": 63488 00:07:07.652 } 00:07:07.652 ] 00:07:07.652 }' 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.652 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.222 04:53:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.222 [2024-11-21 04:53:24.673078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:08.222 "name": "raid_bdev1", 00:07:08.222 "aliases": [ 00:07:08.222 "618a38e4-ad12-4782-b33b-d7bafd7f96b0" 00:07:08.222 ], 00:07:08.222 "product_name": "Raid Volume", 00:07:08.222 "block_size": 512, 00:07:08.222 "num_blocks": 126976, 00:07:08.222 "uuid": "618a38e4-ad12-4782-b33b-d7bafd7f96b0", 00:07:08.222 "assigned_rate_limits": { 00:07:08.222 "rw_ios_per_sec": 0, 00:07:08.222 "rw_mbytes_per_sec": 0, 00:07:08.222 "r_mbytes_per_sec": 0, 00:07:08.222 "w_mbytes_per_sec": 0 00:07:08.222 }, 00:07:08.222 "claimed": false, 00:07:08.222 "zoned": false, 00:07:08.222 "supported_io_types": { 00:07:08.222 "read": true, 00:07:08.222 "write": true, 00:07:08.222 "unmap": true, 00:07:08.222 "flush": true, 00:07:08.222 "reset": true, 00:07:08.222 "nvme_admin": false, 00:07:08.222 "nvme_io": false, 00:07:08.222 "nvme_io_md": false, 00:07:08.222 "write_zeroes": true, 00:07:08.222 "zcopy": false, 00:07:08.222 "get_zone_info": false, 00:07:08.222 "zone_management": false, 00:07:08.222 "zone_append": false, 00:07:08.222 "compare": false, 00:07:08.222 "compare_and_write": false, 00:07:08.222 "abort": false, 00:07:08.222 "seek_hole": false, 00:07:08.222 "seek_data": false, 00:07:08.222 "copy": false, 00:07:08.222 "nvme_iov_md": false 00:07:08.222 }, 00:07:08.222 "memory_domains": [ 00:07:08.222 { 00:07:08.222 "dma_device_id": "system", 00:07:08.222 "dma_device_type": 1 00:07:08.222 }, 00:07:08.222 { 00:07:08.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.222 "dma_device_type": 2 00:07:08.222 }, 00:07:08.222 { 00:07:08.222 "dma_device_id": "system", 00:07:08.222 "dma_device_type": 
1 00:07:08.222 }, 00:07:08.222 { 00:07:08.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.222 "dma_device_type": 2 00:07:08.222 } 00:07:08.222 ], 00:07:08.222 "driver_specific": { 00:07:08.222 "raid": { 00:07:08.222 "uuid": "618a38e4-ad12-4782-b33b-d7bafd7f96b0", 00:07:08.222 "strip_size_kb": 64, 00:07:08.222 "state": "online", 00:07:08.222 "raid_level": "raid0", 00:07:08.222 "superblock": true, 00:07:08.222 "num_base_bdevs": 2, 00:07:08.222 "num_base_bdevs_discovered": 2, 00:07:08.222 "num_base_bdevs_operational": 2, 00:07:08.222 "base_bdevs_list": [ 00:07:08.222 { 00:07:08.222 "name": "pt1", 00:07:08.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.222 "is_configured": true, 00:07:08.222 "data_offset": 2048, 00:07:08.222 "data_size": 63488 00:07:08.222 }, 00:07:08.222 { 00:07:08.222 "name": "pt2", 00:07:08.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.222 "is_configured": true, 00:07:08.222 "data_offset": 2048, 00:07:08.222 "data_size": 63488 00:07:08.222 } 00:07:08.222 ] 00:07:08.222 } 00:07:08.222 } 00:07:08.222 }' 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:08.222 pt2' 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.222 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.223 [2024-11-21 04:53:24.916601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.223 04:53:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=618a38e4-ad12-4782-b33b-d7bafd7f96b0 00:07:08.223 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 618a38e4-ad12-4782-b33b-d7bafd7f96b0 ']' 00:07:08.483 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:08.483 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.483 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.483 [2024-11-21 04:53:24.960299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.483 [2024-11-21 04:53:24.960336] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.483 [2024-11-21 04:53:24.960413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.483 [2024-11-21 04:53:24.960468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.483 [2024-11-21 04:53:24.960478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:08.483 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.483 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:08.483 04:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.483 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.483 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.483 04:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.483 04:53:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.483 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.483 [2024-11-21 04:53:25.096130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:08.483 [2024-11-21 04:53:25.098279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:08.483 [2024-11-21 04:53:25.098345] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:08.483 [2024-11-21 04:53:25.098391] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:08.483 [2024-11-21 04:53:25.098405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.483 [2024-11-21 04:53:25.098418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:08.483 request: 00:07:08.483 { 00:07:08.483 "name": "raid_bdev1", 00:07:08.483 "raid_level": "raid0", 00:07:08.483 "base_bdevs": [ 00:07:08.483 "malloc1", 00:07:08.484 "malloc2" 00:07:08.484 ], 00:07:08.484 "strip_size_kb": 64, 00:07:08.484 "superblock": false, 00:07:08.484 "method": "bdev_raid_create", 00:07:08.484 "req_id": 1 00:07:08.484 } 00:07:08.484 Got JSON-RPC error response 00:07:08.484 response: 00:07:08.484 { 00:07:08.484 "code": -17, 00:07:08.484 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:08.484 } 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.484 [2024-11-21 04:53:25.147973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:08.484 [2024-11-21 04:53:25.148012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.484 [2024-11-21 04:53:25.148029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:08.484 [2024-11-21 04:53:25.148038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.484 [2024-11-21 04:53:25.150451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.484 [2024-11-21 04:53:25.150480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:08.484 [2024-11-21 04:53:25.150540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:08.484 [2024-11-21 04:53:25.150576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:08.484 pt1 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.484 "name": "raid_bdev1", 00:07:08.484 "uuid": "618a38e4-ad12-4782-b33b-d7bafd7f96b0", 00:07:08.484 "strip_size_kb": 64, 00:07:08.484 "state": "configuring", 00:07:08.484 "raid_level": "raid0", 00:07:08.484 "superblock": true, 00:07:08.484 "num_base_bdevs": 2, 00:07:08.484 "num_base_bdevs_discovered": 1, 00:07:08.484 "num_base_bdevs_operational": 2, 00:07:08.484 "base_bdevs_list": [ 00:07:08.484 { 00:07:08.484 "name": "pt1", 00:07:08.484 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.484 "is_configured": true, 00:07:08.484 "data_offset": 2048, 00:07:08.484 "data_size": 63488 00:07:08.484 }, 00:07:08.484 { 00:07:08.484 "name": null, 00:07:08.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.484 "is_configured": false, 00:07:08.484 "data_offset": 2048, 00:07:08.484 "data_size": 63488 00:07:08.484 } 00:07:08.484 ] 00:07:08.484 }' 00:07:08.484 04:53:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.484 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.053 [2024-11-21 04:53:25.579228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:09.053 [2024-11-21 04:53:25.579280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.053 [2024-11-21 04:53:25.579300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:09.053 [2024-11-21 04:53:25.579309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.053 [2024-11-21 04:53:25.579680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.053 [2024-11-21 04:53:25.579696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:09.053 [2024-11-21 04:53:25.579754] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:09.053 [2024-11-21 04:53:25.579778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:09.053 [2024-11-21 04:53:25.579869] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:09.053 [2024-11-21 04:53:25.579877] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.053 [2024-11-21 04:53:25.580121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:09.053 [2024-11-21 04:53:25.580231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:09.053 [2024-11-21 04:53:25.580247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:09.053 [2024-11-21 04:53:25.580340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.053 pt2 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.053 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.053 "name": "raid_bdev1", 00:07:09.053 "uuid": "618a38e4-ad12-4782-b33b-d7bafd7f96b0", 00:07:09.053 "strip_size_kb": 64, 00:07:09.054 "state": "online", 00:07:09.054 "raid_level": "raid0", 00:07:09.054 "superblock": true, 00:07:09.054 "num_base_bdevs": 2, 00:07:09.054 "num_base_bdevs_discovered": 2, 00:07:09.054 "num_base_bdevs_operational": 2, 00:07:09.054 "base_bdevs_list": [ 00:07:09.054 { 00:07:09.054 "name": "pt1", 00:07:09.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.054 "is_configured": true, 00:07:09.054 "data_offset": 2048, 00:07:09.054 "data_size": 63488 00:07:09.054 }, 00:07:09.054 { 00:07:09.054 "name": "pt2", 00:07:09.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.054 "is_configured": true, 00:07:09.054 "data_offset": 2048, 00:07:09.054 "data_size": 63488 00:07:09.054 } 00:07:09.054 ] 00:07:09.054 }' 00:07:09.054 04:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.054 04:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.314 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:09.314 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:09.314 
04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:09.314 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:09.314 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:09.314 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:09.314 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.314 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:09.314 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.314 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.314 [2024-11-21 04:53:26.038797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.574 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.574 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:09.574 "name": "raid_bdev1", 00:07:09.574 "aliases": [ 00:07:09.574 "618a38e4-ad12-4782-b33b-d7bafd7f96b0" 00:07:09.574 ], 00:07:09.574 "product_name": "Raid Volume", 00:07:09.574 "block_size": 512, 00:07:09.574 "num_blocks": 126976, 00:07:09.574 "uuid": "618a38e4-ad12-4782-b33b-d7bafd7f96b0", 00:07:09.574 "assigned_rate_limits": { 00:07:09.574 "rw_ios_per_sec": 0, 00:07:09.574 "rw_mbytes_per_sec": 0, 00:07:09.574 "r_mbytes_per_sec": 0, 00:07:09.574 "w_mbytes_per_sec": 0 00:07:09.574 }, 00:07:09.574 "claimed": false, 00:07:09.574 "zoned": false, 00:07:09.574 "supported_io_types": { 00:07:09.574 "read": true, 00:07:09.574 "write": true, 00:07:09.574 "unmap": true, 00:07:09.574 "flush": true, 00:07:09.574 "reset": true, 00:07:09.574 "nvme_admin": false, 00:07:09.574 "nvme_io": false, 00:07:09.574 "nvme_io_md": false, 00:07:09.574 
"write_zeroes": true, 00:07:09.574 "zcopy": false, 00:07:09.574 "get_zone_info": false, 00:07:09.574 "zone_management": false, 00:07:09.574 "zone_append": false, 00:07:09.574 "compare": false, 00:07:09.574 "compare_and_write": false, 00:07:09.574 "abort": false, 00:07:09.574 "seek_hole": false, 00:07:09.574 "seek_data": false, 00:07:09.574 "copy": false, 00:07:09.574 "nvme_iov_md": false 00:07:09.574 }, 00:07:09.574 "memory_domains": [ 00:07:09.574 { 00:07:09.574 "dma_device_id": "system", 00:07:09.574 "dma_device_type": 1 00:07:09.574 }, 00:07:09.574 { 00:07:09.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.574 "dma_device_type": 2 00:07:09.574 }, 00:07:09.574 { 00:07:09.574 "dma_device_id": "system", 00:07:09.574 "dma_device_type": 1 00:07:09.574 }, 00:07:09.574 { 00:07:09.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.574 "dma_device_type": 2 00:07:09.574 } 00:07:09.574 ], 00:07:09.574 "driver_specific": { 00:07:09.574 "raid": { 00:07:09.574 "uuid": "618a38e4-ad12-4782-b33b-d7bafd7f96b0", 00:07:09.574 "strip_size_kb": 64, 00:07:09.574 "state": "online", 00:07:09.574 "raid_level": "raid0", 00:07:09.574 "superblock": true, 00:07:09.574 "num_base_bdevs": 2, 00:07:09.574 "num_base_bdevs_discovered": 2, 00:07:09.574 "num_base_bdevs_operational": 2, 00:07:09.574 "base_bdevs_list": [ 00:07:09.574 { 00:07:09.574 "name": "pt1", 00:07:09.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.574 "is_configured": true, 00:07:09.574 "data_offset": 2048, 00:07:09.574 "data_size": 63488 00:07:09.574 }, 00:07:09.574 { 00:07:09.574 "name": "pt2", 00:07:09.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.574 "is_configured": true, 00:07:09.574 "data_offset": 2048, 00:07:09.574 "data_size": 63488 00:07:09.574 } 00:07:09.574 ] 00:07:09.574 } 00:07:09.574 } 00:07:09.574 }' 00:07:09.574 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:09.574 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:09.574 pt2' 00:07:09.574 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.574 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:09.574 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.574 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.574 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.575 04:53:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.575 [2024-11-21 04:53:26.258437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 618a38e4-ad12-4782-b33b-d7bafd7f96b0 '!=' 618a38e4-ad12-4782-b33b-d7bafd7f96b0 ']' 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72722 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72722 ']' 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72722 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.575 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72722 00:07:09.835 04:53:26 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.835 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.835 killing process with pid 72722 00:07:09.835 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72722' 00:07:09.835 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72722 00:07:09.835 [2024-11-21 04:53:26.324159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.835 [2024-11-21 04:53:26.324289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.835 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72722 00:07:09.835 [2024-11-21 04:53:26.324369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.835 [2024-11-21 04:53:26.324382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:09.835 [2024-11-21 04:53:26.365363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.095 04:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:10.095 00:07:10.095 real 0m3.419s 00:07:10.095 user 0m5.112s 00:07:10.095 sys 0m0.811s 00:07:10.095 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.095 04:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.095 ************************************ 00:07:10.095 END TEST raid_superblock_test 00:07:10.095 ************************************ 00:07:10.095 04:53:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:10.095 04:53:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:10.095 04:53:26 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:10.095 04:53:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.095 ************************************ 00:07:10.095 START TEST raid_read_error_test 00:07:10.095 ************************************ 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.skD408shXB 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72917 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72917 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72917 ']' 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.095 04:53:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.355 [2024-11-21 04:53:26.867878] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:10.355 [2024-11-21 04:53:26.868013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72917 ] 00:07:10.355 [2024-11-21 04:53:27.035353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.355 [2024-11-21 04:53:27.076577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.614 [2024-11-21 04:53:27.152822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.614 [2024-11-21 04:53:27.152894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.183 BaseBdev1_malloc 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.183 true 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.183 [2024-11-21 04:53:27.711575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:11.183 [2024-11-21 04:53:27.711636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.183 [2024-11-21 04:53:27.711658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:11.183 [2024-11-21 04:53:27.711667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.183 [2024-11-21 04:53:27.714217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.183 [2024-11-21 04:53:27.714249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:11.183 BaseBdev1 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:11.183 BaseBdev2_malloc 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.183 true 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.183 [2024-11-21 04:53:27.758560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:11.183 [2024-11-21 04:53:27.758635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.183 [2024-11-21 04:53:27.758662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:11.183 [2024-11-21 04:53:27.758672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.183 [2024-11-21 04:53:27.761289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.183 [2024-11-21 04:53:27.761329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:11.183 BaseBdev2 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:11.183 04:53:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.183 [2024-11-21 04:53:27.770595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.183 [2024-11-21 04:53:27.772899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:11.183 [2024-11-21 04:53:27.773139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:11.183 [2024-11-21 04:53:27.773163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:11.183 [2024-11-21 04:53:27.773559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:11.183 [2024-11-21 04:53:27.773798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:11.183 [2024-11-21 04:53:27.773819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:11.183 [2024-11-21 04:53:27.774025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.183 "name": "raid_bdev1", 00:07:11.183 "uuid": "9e2e2314-16b2-4765-9cc2-0eebbce4c56f", 00:07:11.183 "strip_size_kb": 64, 00:07:11.183 "state": "online", 00:07:11.183 "raid_level": "raid0", 00:07:11.183 "superblock": true, 00:07:11.183 "num_base_bdevs": 2, 00:07:11.183 "num_base_bdevs_discovered": 2, 00:07:11.183 "num_base_bdevs_operational": 2, 00:07:11.183 "base_bdevs_list": [ 00:07:11.183 { 00:07:11.183 "name": "BaseBdev1", 00:07:11.183 "uuid": "09d93688-9648-5293-bd63-47fd2e78636c", 00:07:11.183 "is_configured": true, 00:07:11.183 "data_offset": 2048, 00:07:11.183 "data_size": 63488 00:07:11.183 }, 00:07:11.183 { 00:07:11.183 "name": "BaseBdev2", 00:07:11.183 "uuid": "3ab68981-0b0a-55a2-840a-3c4c04d574f7", 00:07:11.183 "is_configured": true, 00:07:11.183 "data_offset": 2048, 00:07:11.183 "data_size": 63488 00:07:11.183 } 00:07:11.183 ] 00:07:11.183 }' 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.183 04:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.752 04:53:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:11.752 04:53:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:11.752 [2024-11-21 04:53:28.314225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.691 "name": "raid_bdev1", 00:07:12.691 "uuid": "9e2e2314-16b2-4765-9cc2-0eebbce4c56f", 00:07:12.691 "strip_size_kb": 64, 00:07:12.691 "state": "online", 00:07:12.691 "raid_level": "raid0", 00:07:12.691 "superblock": true, 00:07:12.691 "num_base_bdevs": 2, 00:07:12.691 "num_base_bdevs_discovered": 2, 00:07:12.691 "num_base_bdevs_operational": 2, 00:07:12.691 "base_bdevs_list": [ 00:07:12.691 { 00:07:12.691 "name": "BaseBdev1", 00:07:12.691 "uuid": "09d93688-9648-5293-bd63-47fd2e78636c", 00:07:12.691 "is_configured": true, 00:07:12.691 "data_offset": 2048, 00:07:12.691 "data_size": 63488 00:07:12.691 }, 00:07:12.691 { 00:07:12.691 "name": "BaseBdev2", 00:07:12.691 "uuid": "3ab68981-0b0a-55a2-840a-3c4c04d574f7", 00:07:12.691 "is_configured": true, 00:07:12.691 "data_offset": 2048, 00:07:12.691 "data_size": 63488 00:07:12.691 } 00:07:12.691 ] 00:07:12.691 }' 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.691 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.950 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:12.950 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.950 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.950 [2024-11-21 04:53:29.658746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:12.950 [2024-11-21 04:53:29.658799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.950 [2024-11-21 04:53:29.661387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.950 [2024-11-21 04:53:29.661433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.950 [2024-11-21 04:53:29.661469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.950 [2024-11-21 04:53:29.661495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:12.950 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.950 { 00:07:12.950 "results": [ 00:07:12.950 { 00:07:12.950 "job": "raid_bdev1", 00:07:12.950 "core_mask": "0x1", 00:07:12.950 "workload": "randrw", 00:07:12.950 "percentage": 50, 00:07:12.950 "status": "finished", 00:07:12.950 "queue_depth": 1, 00:07:12.950 "io_size": 131072, 00:07:12.950 "runtime": 1.344776, 00:07:12.950 "iops": 14794.28544233389, 00:07:12.950 "mibps": 1849.2856802917363, 00:07:12.950 "io_failed": 1, 00:07:12.950 "io_timeout": 0, 00:07:12.950 "avg_latency_us": 94.84729642174241, 00:07:12.950 "min_latency_us": 24.705676855895195, 00:07:12.950 "max_latency_us": 1423.7624454148472 00:07:12.950 } 00:07:12.950 ], 
00:07:12.950 "core_count": 1 00:07:12.950 } 00:07:12.950 04:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72917 00:07:12.950 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72917 ']' 00:07:12.950 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72917 00:07:12.950 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:12.950 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.950 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72917 00:07:13.213 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.213 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.213 killing process with pid 72917 00:07:13.213 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72917' 00:07:13.213 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72917 00:07:13.213 [2024-11-21 04:53:29.710349] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.213 04:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72917 00:07:13.213 [2024-11-21 04:53:29.736686] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.485 04:53:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.skD408shXB 00:07:13.485 04:53:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:13.485 04:53:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:13.485 04:53:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:13.485 04:53:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:13.485 04:53:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:13.485 04:53:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:13.485 04:53:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:13.485 00:07:13.485 real 0m3.302s 00:07:13.485 user 0m4.067s 00:07:13.485 sys 0m0.596s 00:07:13.485 04:53:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.485 04:53:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.485 ************************************ 00:07:13.485 END TEST raid_read_error_test 00:07:13.485 ************************************ 00:07:13.485 04:53:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:13.485 04:53:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.485 04:53:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.485 04:53:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.485 ************************************ 00:07:13.485 START TEST raid_write_error_test 00:07:13.485 ************************************ 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.485 04:53:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.10McAkdl9S 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73057 00:07:13.485 04:53:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73057 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73057 ']' 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.485 04:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.486 04:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.486 04:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.745 [2024-11-21 04:53:30.245666] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:07:13.745 [2024-11-21 04:53:30.245811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73057 ] 00:07:13.745 [2024-11-21 04:53:30.399431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.745 [2024-11-21 04:53:30.437117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.005 [2024-11-21 04:53:30.512888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.005 [2024-11-21 04:53:30.512934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.576 BaseBdev1_malloc 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.576 true 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.576 [2024-11-21 04:53:31.115950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:14.576 [2024-11-21 04:53:31.116041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.576 [2024-11-21 04:53:31.116071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:14.576 [2024-11-21 04:53:31.116081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.576 [2024-11-21 04:53:31.118705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.576 [2024-11-21 04:53:31.118743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:14.576 BaseBdev1 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.576 BaseBdev2_malloc 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:14.576 04:53:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.576 true 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.576 [2024-11-21 04:53:31.155490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:14.576 [2024-11-21 04:53:31.155550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.576 [2024-11-21 04:53:31.155574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:14.576 [2024-11-21 04:53:31.155583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.576 [2024-11-21 04:53:31.158133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.576 [2024-11-21 04:53:31.158168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:14.576 BaseBdev2 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:14.576 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.577 [2024-11-21 04:53:31.163556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:14.577 [2024-11-21 04:53:31.165716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:14.577 [2024-11-21 04:53:31.165909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:14.577 [2024-11-21 04:53:31.165923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:14.577 [2024-11-21 04:53:31.166262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:14.577 [2024-11-21 04:53:31.166446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:14.577 [2024-11-21 04:53:31.166466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:14.577 [2024-11-21 04:53:31.166653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.577 "name": "raid_bdev1", 00:07:14.577 "uuid": "baab3e5a-dcc0-4dde-b650-6ed666ac76bf", 00:07:14.577 "strip_size_kb": 64, 00:07:14.577 "state": "online", 00:07:14.577 "raid_level": "raid0", 00:07:14.577 "superblock": true, 00:07:14.577 "num_base_bdevs": 2, 00:07:14.577 "num_base_bdevs_discovered": 2, 00:07:14.577 "num_base_bdevs_operational": 2, 00:07:14.577 "base_bdevs_list": [ 00:07:14.577 { 00:07:14.577 "name": "BaseBdev1", 00:07:14.577 "uuid": "4e2f2ef6-431c-5dbe-9424-7ad0a3de3688", 00:07:14.577 "is_configured": true, 00:07:14.577 "data_offset": 2048, 00:07:14.577 "data_size": 63488 00:07:14.577 }, 00:07:14.577 { 00:07:14.577 "name": "BaseBdev2", 00:07:14.577 "uuid": "668689c5-ad14-5255-ad40-130b9a981b5a", 00:07:14.577 "is_configured": true, 00:07:14.577 "data_offset": 2048, 00:07:14.577 "data_size": 63488 00:07:14.577 } 00:07:14.577 ] 00:07:14.577 }' 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.577 04:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.148 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:15.148 04:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:15.148 [2024-11-21 04:53:31.667221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.087 "name": "raid_bdev1", 00:07:16.087 "uuid": "baab3e5a-dcc0-4dde-b650-6ed666ac76bf", 00:07:16.087 "strip_size_kb": 64, 00:07:16.087 "state": "online", 00:07:16.087 "raid_level": "raid0", 00:07:16.087 "superblock": true, 00:07:16.087 "num_base_bdevs": 2, 00:07:16.087 "num_base_bdevs_discovered": 2, 00:07:16.087 "num_base_bdevs_operational": 2, 00:07:16.087 "base_bdevs_list": [ 00:07:16.087 { 00:07:16.087 "name": "BaseBdev1", 00:07:16.087 "uuid": "4e2f2ef6-431c-5dbe-9424-7ad0a3de3688", 00:07:16.087 "is_configured": true, 00:07:16.087 "data_offset": 2048, 00:07:16.087 "data_size": 63488 00:07:16.087 }, 00:07:16.087 { 00:07:16.087 "name": "BaseBdev2", 00:07:16.087 "uuid": "668689c5-ad14-5255-ad40-130b9a981b5a", 00:07:16.087 "is_configured": true, 00:07:16.087 "data_offset": 2048, 00:07:16.087 "data_size": 63488 00:07:16.087 } 00:07:16.087 ] 00:07:16.087 }' 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.087 04:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.347 04:53:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:16.347 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.347 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.347 [2024-11-21 04:53:33.031569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:16.347 [2024-11-21 04:53:33.031620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.347 [2024-11-21 04:53:33.034194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.347 [2024-11-21 04:53:33.034242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.347 [2024-11-21 04:53:33.034302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.347 [2024-11-21 04:53:33.034317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:16.347 { 00:07:16.347 "results": [ 00:07:16.347 { 00:07:16.347 "job": "raid_bdev1", 00:07:16.347 "core_mask": "0x1", 00:07:16.347 "workload": "randrw", 00:07:16.347 "percentage": 50, 00:07:16.347 "status": "finished", 00:07:16.347 "queue_depth": 1, 00:07:16.347 "io_size": 131072, 00:07:16.347 "runtime": 1.36481, 00:07:16.347 "iops": 14627.67711256512, 00:07:16.347 "mibps": 1828.45963907064, 00:07:16.347 "io_failed": 1, 00:07:16.347 "io_timeout": 0, 00:07:16.347 "avg_latency_us": 96.09108271352596, 00:07:16.347 "min_latency_us": 25.3764192139738, 00:07:16.347 "max_latency_us": 1409.4532751091704 00:07:16.347 } 00:07:16.347 ], 00:07:16.347 "core_count": 1 00:07:16.347 } 00:07:16.347 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.347 04:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73057 00:07:16.348 04:53:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 73057 ']' 00:07:16.348 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73057 00:07:16.348 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:16.348 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.348 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73057 00:07:16.608 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.608 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.608 killing process with pid 73057 00:07:16.608 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73057' 00:07:16.608 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73057 00:07:16.608 [2024-11-21 04:53:33.083377] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.608 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73057 00:07:16.608 [2024-11-21 04:53:33.113344] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.867 04:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.10McAkdl9S 00:07:16.867 04:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:16.867 04:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:16.867 04:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:16.867 04:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:16.867 04:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.867 04:53:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:16.867 04:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:16.867 00:07:16.867 real 0m3.310s 00:07:16.867 user 0m4.070s 00:07:16.868 sys 0m0.597s 00:07:16.868 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.868 04:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.868 ************************************ 00:07:16.868 END TEST raid_write_error_test 00:07:16.868 ************************************ 00:07:16.868 04:53:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:16.868 04:53:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:16.868 04:53:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:16.868 04:53:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.868 04:53:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.868 ************************************ 00:07:16.868 START TEST raid_state_function_test 00:07:16.868 ************************************ 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73184 00:07:16.868 04:53:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:16.868 Process raid pid: 73184 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73184' 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73184 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73184 ']' 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.868 04:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.128 [2024-11-21 04:53:33.607535] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:07:17.128 [2024-11-21 04:53:33.607660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.128 [2024-11-21 04:53:33.780337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.128 [2024-11-21 04:53:33.824925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.388 [2024-11-21 04:53:33.900957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.388 [2024-11-21 04:53:33.901007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.957 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.957 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:17.957 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.957 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.958 [2024-11-21 04:53:34.436929] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.958 [2024-11-21 04:53:34.436997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.958 [2024-11-21 04:53:34.437016] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.958 [2024-11-21 04:53:34.437028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.958 04:53:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.958 "name": "Existed_Raid", 00:07:17.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.958 "strip_size_kb": 64, 00:07:17.958 "state": "configuring", 00:07:17.958 
"raid_level": "concat", 00:07:17.958 "superblock": false, 00:07:17.958 "num_base_bdevs": 2, 00:07:17.958 "num_base_bdevs_discovered": 0, 00:07:17.958 "num_base_bdevs_operational": 2, 00:07:17.958 "base_bdevs_list": [ 00:07:17.958 { 00:07:17.958 "name": "BaseBdev1", 00:07:17.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.958 "is_configured": false, 00:07:17.958 "data_offset": 0, 00:07:17.958 "data_size": 0 00:07:17.958 }, 00:07:17.958 { 00:07:17.958 "name": "BaseBdev2", 00:07:17.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.958 "is_configured": false, 00:07:17.958 "data_offset": 0, 00:07:17.958 "data_size": 0 00:07:17.958 } 00:07:17.958 ] 00:07:17.958 }' 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.958 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.217 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:18.217 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.217 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.217 [2024-11-21 04:53:34.912069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:18.217 [2024-11-21 04:53:34.912152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:18.217 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.217 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.217 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.217 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:18.218 [2024-11-21 04:53:34.923991] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:18.218 [2024-11-21 04:53:34.924032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:18.218 [2024-11-21 04:53:34.924042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.218 [2024-11-21 04:53:34.924054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.218 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.218 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:18.218 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.218 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 [2024-11-21 04:53:34.950928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.477 BaseBdev1 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 [ 00:07:18.477 { 00:07:18.477 "name": "BaseBdev1", 00:07:18.477 "aliases": [ 00:07:18.477 "c9715bbb-3f53-406a-a376-464060667226" 00:07:18.477 ], 00:07:18.477 "product_name": "Malloc disk", 00:07:18.477 "block_size": 512, 00:07:18.477 "num_blocks": 65536, 00:07:18.477 "uuid": "c9715bbb-3f53-406a-a376-464060667226", 00:07:18.477 "assigned_rate_limits": { 00:07:18.477 "rw_ios_per_sec": 0, 00:07:18.477 "rw_mbytes_per_sec": 0, 00:07:18.477 "r_mbytes_per_sec": 0, 00:07:18.477 "w_mbytes_per_sec": 0 00:07:18.477 }, 00:07:18.477 "claimed": true, 00:07:18.477 "claim_type": "exclusive_write", 00:07:18.477 "zoned": false, 00:07:18.477 "supported_io_types": { 00:07:18.477 "read": true, 00:07:18.477 "write": true, 00:07:18.477 "unmap": true, 00:07:18.477 "flush": true, 00:07:18.477 "reset": true, 00:07:18.477 "nvme_admin": false, 00:07:18.477 "nvme_io": false, 00:07:18.477 "nvme_io_md": false, 00:07:18.477 "write_zeroes": true, 00:07:18.477 "zcopy": true, 00:07:18.477 "get_zone_info": false, 00:07:18.477 "zone_management": false, 00:07:18.477 "zone_append": false, 00:07:18.477 "compare": false, 00:07:18.477 "compare_and_write": false, 00:07:18.477 "abort": true, 00:07:18.477 "seek_hole": false, 00:07:18.477 "seek_data": false, 00:07:18.477 "copy": true, 00:07:18.477 "nvme_iov_md": 
false 00:07:18.477 }, 00:07:18.477 "memory_domains": [ 00:07:18.477 { 00:07:18.477 "dma_device_id": "system", 00:07:18.477 "dma_device_type": 1 00:07:18.477 }, 00:07:18.477 { 00:07:18.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.477 "dma_device_type": 2 00:07:18.477 } 00:07:18.477 ], 00:07:18.477 "driver_specific": {} 00:07:18.477 } 00:07:18.477 ] 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.477 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.478 04:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.478 
04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.478 04:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.478 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.478 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.478 "name": "Existed_Raid", 00:07:18.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.478 "strip_size_kb": 64, 00:07:18.478 "state": "configuring", 00:07:18.478 "raid_level": "concat", 00:07:18.478 "superblock": false, 00:07:18.478 "num_base_bdevs": 2, 00:07:18.478 "num_base_bdevs_discovered": 1, 00:07:18.478 "num_base_bdevs_operational": 2, 00:07:18.478 "base_bdevs_list": [ 00:07:18.478 { 00:07:18.478 "name": "BaseBdev1", 00:07:18.478 "uuid": "c9715bbb-3f53-406a-a376-464060667226", 00:07:18.478 "is_configured": true, 00:07:18.478 "data_offset": 0, 00:07:18.478 "data_size": 65536 00:07:18.478 }, 00:07:18.478 { 00:07:18.478 "name": "BaseBdev2", 00:07:18.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.478 "is_configured": false, 00:07:18.478 "data_offset": 0, 00:07:18.478 "data_size": 0 00:07:18.478 } 00:07:18.478 ] 00:07:18.478 }' 00:07:18.478 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.478 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.738 [2024-11-21 04:53:35.442160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:18.738 [2024-11-21 04:53:35.442241] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.738 [2024-11-21 04:53:35.450143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.738 [2024-11-21 04:53:35.452391] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.738 [2024-11-21 04:53:35.452431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.738 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.998 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.998 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.998 "name": "Existed_Raid", 00:07:18.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.998 "strip_size_kb": 64, 00:07:18.998 "state": "configuring", 00:07:18.998 "raid_level": "concat", 00:07:18.998 "superblock": false, 00:07:18.998 "num_base_bdevs": 2, 00:07:18.998 "num_base_bdevs_discovered": 1, 00:07:18.998 "num_base_bdevs_operational": 2, 00:07:18.998 "base_bdevs_list": [ 00:07:18.998 { 00:07:18.998 "name": "BaseBdev1", 00:07:18.998 "uuid": "c9715bbb-3f53-406a-a376-464060667226", 00:07:18.998 "is_configured": true, 00:07:18.998 "data_offset": 0, 00:07:18.998 "data_size": 65536 00:07:18.998 }, 00:07:18.998 { 00:07:18.998 "name": "BaseBdev2", 00:07:18.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.998 "is_configured": false, 00:07:18.998 "data_offset": 0, 00:07:18.998 "data_size": 0 00:07:18.998 } 
00:07:18.998 ] 00:07:18.998 }' 00:07:18.998 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.998 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.257 [2024-11-21 04:53:35.914173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:19.257 [2024-11-21 04:53:35.914232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:19.257 [2024-11-21 04:53:35.914241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:19.257 [2024-11-21 04:53:35.914575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:19.257 [2024-11-21 04:53:35.914831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:19.257 [2024-11-21 04:53:35.914868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:19.257 [2024-11-21 04:53:35.915138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.257 BaseBdev2 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.257 04:53:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.257 [ 00:07:19.257 { 00:07:19.257 "name": "BaseBdev2", 00:07:19.257 "aliases": [ 00:07:19.257 "c1b9b880-94af-46a9-83f9-5c4aaae03bf1" 00:07:19.257 ], 00:07:19.257 "product_name": "Malloc disk", 00:07:19.257 "block_size": 512, 00:07:19.257 "num_blocks": 65536, 00:07:19.257 "uuid": "c1b9b880-94af-46a9-83f9-5c4aaae03bf1", 00:07:19.257 "assigned_rate_limits": { 00:07:19.257 "rw_ios_per_sec": 0, 00:07:19.257 "rw_mbytes_per_sec": 0, 00:07:19.257 "r_mbytes_per_sec": 0, 00:07:19.257 "w_mbytes_per_sec": 0 00:07:19.257 }, 00:07:19.257 "claimed": true, 00:07:19.257 "claim_type": "exclusive_write", 00:07:19.257 "zoned": false, 00:07:19.257 "supported_io_types": { 00:07:19.257 "read": true, 00:07:19.257 "write": true, 00:07:19.257 "unmap": true, 00:07:19.257 "flush": true, 00:07:19.257 "reset": true, 00:07:19.257 "nvme_admin": false, 00:07:19.257 "nvme_io": false, 00:07:19.257 "nvme_io_md": 
false, 00:07:19.257 "write_zeroes": true, 00:07:19.257 "zcopy": true, 00:07:19.257 "get_zone_info": false, 00:07:19.257 "zone_management": false, 00:07:19.257 "zone_append": false, 00:07:19.257 "compare": false, 00:07:19.257 "compare_and_write": false, 00:07:19.257 "abort": true, 00:07:19.257 "seek_hole": false, 00:07:19.257 "seek_data": false, 00:07:19.257 "copy": true, 00:07:19.257 "nvme_iov_md": false 00:07:19.257 }, 00:07:19.257 "memory_domains": [ 00:07:19.257 { 00:07:19.257 "dma_device_id": "system", 00:07:19.257 "dma_device_type": 1 00:07:19.257 }, 00:07:19.257 { 00:07:19.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.257 "dma_device_type": 2 00:07:19.257 } 00:07:19.257 ], 00:07:19.257 "driver_specific": {} 00:07:19.257 } 00:07:19.257 ] 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:19.257 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.258 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.258 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.258 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.258 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.258 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.258 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.258 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.517 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.517 "name": "Existed_Raid", 00:07:19.517 "uuid": "0005d03f-9b7c-411e-944d-56b7b0ef1ccd", 00:07:19.517 "strip_size_kb": 64, 00:07:19.517 "state": "online", 00:07:19.517 "raid_level": "concat", 00:07:19.517 "superblock": false, 00:07:19.517 "num_base_bdevs": 2, 00:07:19.518 "num_base_bdevs_discovered": 2, 00:07:19.518 "num_base_bdevs_operational": 2, 00:07:19.518 "base_bdevs_list": [ 00:07:19.518 { 00:07:19.518 "name": "BaseBdev1", 00:07:19.518 "uuid": "c9715bbb-3f53-406a-a376-464060667226", 00:07:19.518 "is_configured": true, 00:07:19.518 "data_offset": 0, 00:07:19.518 "data_size": 65536 00:07:19.518 }, 00:07:19.518 { 00:07:19.518 "name": "BaseBdev2", 00:07:19.518 "uuid": "c1b9b880-94af-46a9-83f9-5c4aaae03bf1", 00:07:19.518 "is_configured": true, 00:07:19.518 "data_offset": 0, 00:07:19.518 "data_size": 65536 00:07:19.518 } 00:07:19.518 ] 00:07:19.518 }' 00:07:19.518 04:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:19.518 04:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:19.778 [2024-11-21 04:53:36.377809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:19.778 "name": "Existed_Raid", 00:07:19.778 "aliases": [ 00:07:19.778 "0005d03f-9b7c-411e-944d-56b7b0ef1ccd" 00:07:19.778 ], 00:07:19.778 "product_name": "Raid Volume", 00:07:19.778 "block_size": 512, 00:07:19.778 "num_blocks": 131072, 00:07:19.778 "uuid": "0005d03f-9b7c-411e-944d-56b7b0ef1ccd", 00:07:19.778 "assigned_rate_limits": { 00:07:19.778 "rw_ios_per_sec": 0, 00:07:19.778 "rw_mbytes_per_sec": 0, 00:07:19.778 "r_mbytes_per_sec": 
0, 00:07:19.778 "w_mbytes_per_sec": 0 00:07:19.778 }, 00:07:19.778 "claimed": false, 00:07:19.778 "zoned": false, 00:07:19.778 "supported_io_types": { 00:07:19.778 "read": true, 00:07:19.778 "write": true, 00:07:19.778 "unmap": true, 00:07:19.778 "flush": true, 00:07:19.778 "reset": true, 00:07:19.778 "nvme_admin": false, 00:07:19.778 "nvme_io": false, 00:07:19.778 "nvme_io_md": false, 00:07:19.778 "write_zeroes": true, 00:07:19.778 "zcopy": false, 00:07:19.778 "get_zone_info": false, 00:07:19.778 "zone_management": false, 00:07:19.778 "zone_append": false, 00:07:19.778 "compare": false, 00:07:19.778 "compare_and_write": false, 00:07:19.778 "abort": false, 00:07:19.778 "seek_hole": false, 00:07:19.778 "seek_data": false, 00:07:19.778 "copy": false, 00:07:19.778 "nvme_iov_md": false 00:07:19.778 }, 00:07:19.778 "memory_domains": [ 00:07:19.778 { 00:07:19.778 "dma_device_id": "system", 00:07:19.778 "dma_device_type": 1 00:07:19.778 }, 00:07:19.778 { 00:07:19.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.778 "dma_device_type": 2 00:07:19.778 }, 00:07:19.778 { 00:07:19.778 "dma_device_id": "system", 00:07:19.778 "dma_device_type": 1 00:07:19.778 }, 00:07:19.778 { 00:07:19.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.778 "dma_device_type": 2 00:07:19.778 } 00:07:19.778 ], 00:07:19.778 "driver_specific": { 00:07:19.778 "raid": { 00:07:19.778 "uuid": "0005d03f-9b7c-411e-944d-56b7b0ef1ccd", 00:07:19.778 "strip_size_kb": 64, 00:07:19.778 "state": "online", 00:07:19.778 "raid_level": "concat", 00:07:19.778 "superblock": false, 00:07:19.778 "num_base_bdevs": 2, 00:07:19.778 "num_base_bdevs_discovered": 2, 00:07:19.778 "num_base_bdevs_operational": 2, 00:07:19.778 "base_bdevs_list": [ 00:07:19.778 { 00:07:19.778 "name": "BaseBdev1", 00:07:19.778 "uuid": "c9715bbb-3f53-406a-a376-464060667226", 00:07:19.778 "is_configured": true, 00:07:19.778 "data_offset": 0, 00:07:19.778 "data_size": 65536 00:07:19.778 }, 00:07:19.778 { 00:07:19.778 "name": "BaseBdev2", 
00:07:19.778 "uuid": "c1b9b880-94af-46a9-83f9-5c4aaae03bf1", 00:07:19.778 "is_configured": true, 00:07:19.778 "data_offset": 0, 00:07:19.778 "data_size": 65536 00:07:19.778 } 00:07:19.778 ] 00:07:19.778 } 00:07:19.778 } 00:07:19.778 }' 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:19.778 BaseBdev2' 00:07:19.778 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.779 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.779 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.779 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:19.779 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.779 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.779 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.039 [2024-11-21 04:53:36.601195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:20.039 [2024-11-21 04:53:36.601233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.039 [2024-11-21 04:53:36.601304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.039 "name": "Existed_Raid", 00:07:20.039 "uuid": "0005d03f-9b7c-411e-944d-56b7b0ef1ccd", 00:07:20.039 "strip_size_kb": 64, 00:07:20.039 
"state": "offline", 00:07:20.039 "raid_level": "concat", 00:07:20.039 "superblock": false, 00:07:20.039 "num_base_bdevs": 2, 00:07:20.039 "num_base_bdevs_discovered": 1, 00:07:20.039 "num_base_bdevs_operational": 1, 00:07:20.039 "base_bdevs_list": [ 00:07:20.039 { 00:07:20.039 "name": null, 00:07:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.039 "is_configured": false, 00:07:20.039 "data_offset": 0, 00:07:20.039 "data_size": 65536 00:07:20.039 }, 00:07:20.039 { 00:07:20.039 "name": "BaseBdev2", 00:07:20.039 "uuid": "c1b9b880-94af-46a9-83f9-5c4aaae03bf1", 00:07:20.039 "is_configured": true, 00:07:20.039 "data_offset": 0, 00:07:20.039 "data_size": 65536 00:07:20.039 } 00:07:20.039 ] 00:07:20.039 }' 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.039 04:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.609 [2024-11-21 04:53:37.092785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:20.609 [2024-11-21 04:53:37.092870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73184 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73184 ']' 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 73184 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73184 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.609 killing process with pid 73184 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73184' 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73184 00:07:20.609 [2024-11-21 04:53:37.196884] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.609 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73184 00:07:20.609 [2024-11-21 04:53:37.198503] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:20.869 00:07:20.869 real 0m4.010s 00:07:20.869 user 0m6.182s 00:07:20.869 sys 0m0.856s 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.869 ************************************ 00:07:20.869 END TEST raid_state_function_test 00:07:20.869 ************************************ 00:07:20.869 04:53:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:20.869 04:53:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:20.869 04:53:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.869 04:53:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.869 ************************************ 00:07:20.869 START TEST raid_state_function_test_sb 00:07:20.869 ************************************ 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.869 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:21.129 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73426 00:07:21.130 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.130 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73426' 00:07:21.130 Process raid pid: 73426 00:07:21.130 04:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73426 00:07:21.130 04:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73426 ']' 00:07:21.130 04:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.130 04:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.130 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:21.130 04:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.130 04:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.130 04:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.130 [2024-11-21 04:53:37.687363] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:21.130 [2024-11-21 04:53:37.687493] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.130 [2024-11-21 04:53:37.860575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.389 [2024-11-21 04:53:37.901375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.389 [2024-11-21 04:53:37.977107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.389 [2024-11-21 04:53:37.977151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.963 [2024-11-21 04:53:38.531984] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:21.963 [2024-11-21 04:53:38.532041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.963 [2024-11-21 04:53:38.532052] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.963 [2024-11-21 04:53:38.532062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.963 "name": "Existed_Raid", 00:07:21.963 "uuid": "64c7332b-bdc3-4cec-a69c-b1f241fa9e06", 00:07:21.963 "strip_size_kb": 64, 00:07:21.963 "state": "configuring", 00:07:21.963 "raid_level": "concat", 00:07:21.963 "superblock": true, 00:07:21.963 "num_base_bdevs": 2, 00:07:21.963 "num_base_bdevs_discovered": 0, 00:07:21.963 "num_base_bdevs_operational": 2, 00:07:21.963 "base_bdevs_list": [ 00:07:21.963 { 00:07:21.963 "name": "BaseBdev1", 00:07:21.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.963 "is_configured": false, 00:07:21.963 "data_offset": 0, 00:07:21.963 "data_size": 0 00:07:21.963 }, 00:07:21.963 { 00:07:21.963 "name": "BaseBdev2", 00:07:21.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.963 "is_configured": false, 00:07:21.963 "data_offset": 0, 00:07:21.963 "data_size": 0 00:07:21.963 } 00:07:21.963 ] 00:07:21.963 }' 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.963 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.533 [2024-11-21 04:53:38.979178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:22.533 [2024-11-21 04:53:38.979244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.533 [2024-11-21 04:53:38.991131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.533 [2024-11-21 04:53:38.991178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.533 [2024-11-21 04:53:38.991186] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.533 [2024-11-21 04:53:38.991198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.533 04:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.533 [2024-11-21 04:53:39.018039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.533 BaseBdev1 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.533 [ 00:07:22.533 { 00:07:22.533 "name": "BaseBdev1", 00:07:22.533 "aliases": [ 00:07:22.533 "a108c24b-ffb3-480e-acf7-66e5f72bb4c0" 00:07:22.533 ], 00:07:22.533 "product_name": "Malloc disk", 00:07:22.533 "block_size": 512, 00:07:22.533 "num_blocks": 65536, 00:07:22.533 "uuid": "a108c24b-ffb3-480e-acf7-66e5f72bb4c0", 00:07:22.533 "assigned_rate_limits": { 00:07:22.533 "rw_ios_per_sec": 0, 00:07:22.533 "rw_mbytes_per_sec": 0, 00:07:22.533 "r_mbytes_per_sec": 0, 00:07:22.533 "w_mbytes_per_sec": 0 00:07:22.533 }, 00:07:22.533 "claimed": true, 
00:07:22.533 "claim_type": "exclusive_write", 00:07:22.533 "zoned": false, 00:07:22.533 "supported_io_types": { 00:07:22.533 "read": true, 00:07:22.533 "write": true, 00:07:22.533 "unmap": true, 00:07:22.533 "flush": true, 00:07:22.533 "reset": true, 00:07:22.533 "nvme_admin": false, 00:07:22.533 "nvme_io": false, 00:07:22.533 "nvme_io_md": false, 00:07:22.533 "write_zeroes": true, 00:07:22.533 "zcopy": true, 00:07:22.533 "get_zone_info": false, 00:07:22.533 "zone_management": false, 00:07:22.533 "zone_append": false, 00:07:22.533 "compare": false, 00:07:22.533 "compare_and_write": false, 00:07:22.533 "abort": true, 00:07:22.533 "seek_hole": false, 00:07:22.533 "seek_data": false, 00:07:22.533 "copy": true, 00:07:22.533 "nvme_iov_md": false 00:07:22.533 }, 00:07:22.533 "memory_domains": [ 00:07:22.533 { 00:07:22.533 "dma_device_id": "system", 00:07:22.533 "dma_device_type": 1 00:07:22.533 }, 00:07:22.533 { 00:07:22.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.533 "dma_device_type": 2 00:07:22.533 } 00:07:22.533 ], 00:07:22.533 "driver_specific": {} 00:07:22.533 } 00:07:22.533 ] 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.533 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.533 04:53:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.534 "name": "Existed_Raid", 00:07:22.534 "uuid": "c0fccd6b-c773-419d-9679-b817bf6a2b36", 00:07:22.534 "strip_size_kb": 64, 00:07:22.534 "state": "configuring", 00:07:22.534 "raid_level": "concat", 00:07:22.534 "superblock": true, 00:07:22.534 "num_base_bdevs": 2, 00:07:22.534 "num_base_bdevs_discovered": 1, 00:07:22.534 "num_base_bdevs_operational": 2, 00:07:22.534 "base_bdevs_list": [ 00:07:22.534 { 00:07:22.534 "name": "BaseBdev1", 00:07:22.534 "uuid": "a108c24b-ffb3-480e-acf7-66e5f72bb4c0", 00:07:22.534 "is_configured": true, 00:07:22.534 "data_offset": 2048, 00:07:22.534 "data_size": 63488 00:07:22.534 }, 00:07:22.534 { 00:07:22.534 "name": "BaseBdev2", 00:07:22.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.534 
"is_configured": false, 00:07:22.534 "data_offset": 0, 00:07:22.534 "data_size": 0 00:07:22.534 } 00:07:22.534 ] 00:07:22.534 }' 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.534 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.793 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.793 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.793 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.793 [2024-11-21 04:53:39.509243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.793 [2024-11-21 04:53:39.509312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:22.793 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.793 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.793 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.793 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.793 [2024-11-21 04:53:39.521253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.793 [2024-11-21 04:53:39.523503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.793 [2024-11-21 04:53:39.523540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.052 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.052 04:53:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:23.052 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.052 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.053 04:53:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.053 "name": "Existed_Raid", 00:07:23.053 "uuid": "3498eed5-c2b5-42e6-b3fe-afcc64b3efdc", 00:07:23.053 "strip_size_kb": 64, 00:07:23.053 "state": "configuring", 00:07:23.053 "raid_level": "concat", 00:07:23.053 "superblock": true, 00:07:23.053 "num_base_bdevs": 2, 00:07:23.053 "num_base_bdevs_discovered": 1, 00:07:23.053 "num_base_bdevs_operational": 2, 00:07:23.053 "base_bdevs_list": [ 00:07:23.053 { 00:07:23.053 "name": "BaseBdev1", 00:07:23.053 "uuid": "a108c24b-ffb3-480e-acf7-66e5f72bb4c0", 00:07:23.053 "is_configured": true, 00:07:23.053 "data_offset": 2048, 00:07:23.053 "data_size": 63488 00:07:23.053 }, 00:07:23.053 { 00:07:23.053 "name": "BaseBdev2", 00:07:23.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.053 "is_configured": false, 00:07:23.053 "data_offset": 0, 00:07:23.053 "data_size": 0 00:07:23.053 } 00:07:23.053 ] 00:07:23.053 }' 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.053 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.312 [2024-11-21 04:53:39.941194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.312 [2024-11-21 04:53:39.941390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:23.312 [2024-11-21 04:53:39.941424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.312 [2024-11-21 04:53:39.941786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005ba0 00:07:23.312 [2024-11-21 04:53:39.941952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:23.312 [2024-11-21 04:53:39.941976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:23.312 BaseBdev2 00:07:23.312 [2024-11-21 04:53:39.942149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:23.312 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.312 04:53:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.312 [ 00:07:23.312 { 00:07:23.312 "name": "BaseBdev2", 00:07:23.312 "aliases": [ 00:07:23.312 "e3f2008f-661e-4e05-864e-0f145d49e2aa" 00:07:23.312 ], 00:07:23.312 "product_name": "Malloc disk", 00:07:23.312 "block_size": 512, 00:07:23.312 "num_blocks": 65536, 00:07:23.312 "uuid": "e3f2008f-661e-4e05-864e-0f145d49e2aa", 00:07:23.312 "assigned_rate_limits": { 00:07:23.312 "rw_ios_per_sec": 0, 00:07:23.312 "rw_mbytes_per_sec": 0, 00:07:23.313 "r_mbytes_per_sec": 0, 00:07:23.313 "w_mbytes_per_sec": 0 00:07:23.313 }, 00:07:23.313 "claimed": true, 00:07:23.313 "claim_type": "exclusive_write", 00:07:23.313 "zoned": false, 00:07:23.313 "supported_io_types": { 00:07:23.313 "read": true, 00:07:23.313 "write": true, 00:07:23.313 "unmap": true, 00:07:23.313 "flush": true, 00:07:23.313 "reset": true, 00:07:23.313 "nvme_admin": false, 00:07:23.313 "nvme_io": false, 00:07:23.313 "nvme_io_md": false, 00:07:23.313 "write_zeroes": true, 00:07:23.313 "zcopy": true, 00:07:23.313 "get_zone_info": false, 00:07:23.313 "zone_management": false, 00:07:23.313 "zone_append": false, 00:07:23.313 "compare": false, 00:07:23.313 "compare_and_write": false, 00:07:23.313 "abort": true, 00:07:23.313 "seek_hole": false, 00:07:23.313 "seek_data": false, 00:07:23.313 "copy": true, 00:07:23.313 "nvme_iov_md": false 00:07:23.313 }, 00:07:23.313 "memory_domains": [ 00:07:23.313 { 00:07:23.313 "dma_device_id": "system", 00:07:23.313 "dma_device_type": 1 00:07:23.313 }, 00:07:23.313 { 00:07:23.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.313 "dma_device_type": 2 00:07:23.313 } 00:07:23.313 ], 00:07:23.313 "driver_specific": {} 00:07:23.313 } 00:07:23.313 ] 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:23.313 04:53:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.313 04:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.313 04:53:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.313 "name": "Existed_Raid", 00:07:23.313 "uuid": "3498eed5-c2b5-42e6-b3fe-afcc64b3efdc", 00:07:23.313 "strip_size_kb": 64, 00:07:23.313 "state": "online", 00:07:23.313 "raid_level": "concat", 00:07:23.313 "superblock": true, 00:07:23.313 "num_base_bdevs": 2, 00:07:23.313 "num_base_bdevs_discovered": 2, 00:07:23.313 "num_base_bdevs_operational": 2, 00:07:23.313 "base_bdevs_list": [ 00:07:23.313 { 00:07:23.313 "name": "BaseBdev1", 00:07:23.313 "uuid": "a108c24b-ffb3-480e-acf7-66e5f72bb4c0", 00:07:23.313 "is_configured": true, 00:07:23.313 "data_offset": 2048, 00:07:23.313 "data_size": 63488 00:07:23.313 }, 00:07:23.313 { 00:07:23.313 "name": "BaseBdev2", 00:07:23.313 "uuid": "e3f2008f-661e-4e05-864e-0f145d49e2aa", 00:07:23.313 "is_configured": true, 00:07:23.313 "data_offset": 2048, 00:07:23.313 "data_size": 63488 00:07:23.313 } 00:07:23.313 ] 00:07:23.313 }' 00:07:23.313 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.313 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:23.880 [2024-11-21 04:53:40.444810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.880 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:23.880 "name": "Existed_Raid", 00:07:23.880 "aliases": [ 00:07:23.880 "3498eed5-c2b5-42e6-b3fe-afcc64b3efdc" 00:07:23.880 ], 00:07:23.880 "product_name": "Raid Volume", 00:07:23.880 "block_size": 512, 00:07:23.880 "num_blocks": 126976, 00:07:23.880 "uuid": "3498eed5-c2b5-42e6-b3fe-afcc64b3efdc", 00:07:23.880 "assigned_rate_limits": { 00:07:23.880 "rw_ios_per_sec": 0, 00:07:23.880 "rw_mbytes_per_sec": 0, 00:07:23.880 "r_mbytes_per_sec": 0, 00:07:23.880 "w_mbytes_per_sec": 0 00:07:23.880 }, 00:07:23.880 "claimed": false, 00:07:23.880 "zoned": false, 00:07:23.880 "supported_io_types": { 00:07:23.880 "read": true, 00:07:23.880 "write": true, 00:07:23.880 "unmap": true, 00:07:23.880 "flush": true, 00:07:23.880 "reset": true, 00:07:23.880 "nvme_admin": false, 00:07:23.880 "nvme_io": false, 00:07:23.880 "nvme_io_md": false, 00:07:23.880 "write_zeroes": true, 00:07:23.880 "zcopy": false, 00:07:23.880 "get_zone_info": false, 00:07:23.880 "zone_management": false, 00:07:23.880 "zone_append": false, 00:07:23.880 "compare": false, 00:07:23.880 "compare_and_write": false, 00:07:23.880 "abort": false, 00:07:23.880 "seek_hole": false, 00:07:23.880 "seek_data": false, 00:07:23.880 "copy": false, 00:07:23.880 "nvme_iov_md": false 00:07:23.880 }, 00:07:23.880 "memory_domains": [ 00:07:23.880 { 00:07:23.880 
"dma_device_id": "system", 00:07:23.880 "dma_device_type": 1 00:07:23.880 }, 00:07:23.880 { 00:07:23.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.880 "dma_device_type": 2 00:07:23.880 }, 00:07:23.880 { 00:07:23.880 "dma_device_id": "system", 00:07:23.880 "dma_device_type": 1 00:07:23.880 }, 00:07:23.880 { 00:07:23.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.880 "dma_device_type": 2 00:07:23.880 } 00:07:23.880 ], 00:07:23.880 "driver_specific": { 00:07:23.880 "raid": { 00:07:23.880 "uuid": "3498eed5-c2b5-42e6-b3fe-afcc64b3efdc", 00:07:23.880 "strip_size_kb": 64, 00:07:23.880 "state": "online", 00:07:23.880 "raid_level": "concat", 00:07:23.880 "superblock": true, 00:07:23.880 "num_base_bdevs": 2, 00:07:23.880 "num_base_bdevs_discovered": 2, 00:07:23.880 "num_base_bdevs_operational": 2, 00:07:23.880 "base_bdevs_list": [ 00:07:23.880 { 00:07:23.880 "name": "BaseBdev1", 00:07:23.880 "uuid": "a108c24b-ffb3-480e-acf7-66e5f72bb4c0", 00:07:23.880 "is_configured": true, 00:07:23.880 "data_offset": 2048, 00:07:23.880 "data_size": 63488 00:07:23.880 }, 00:07:23.880 { 00:07:23.880 "name": "BaseBdev2", 00:07:23.880 "uuid": "e3f2008f-661e-4e05-864e-0f145d49e2aa", 00:07:23.880 "is_configured": true, 00:07:23.880 "data_offset": 2048, 00:07:23.880 "data_size": 63488 00:07:23.880 } 00:07:23.880 ] 00:07:23.880 } 00:07:23.880 } 00:07:23.880 }' 00:07:23.881 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:23.881 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:23.881 BaseBdev2' 00:07:23.881 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.881 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:23.881 04:53:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.881 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:23.881 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.881 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.881 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.881 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.141 [2024-11-21 04:53:40.656220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:24.141 [2024-11-21 04:53:40.656262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.141 [2024-11-21 04:53:40.656323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.141 "name": "Existed_Raid", 00:07:24.141 "uuid": "3498eed5-c2b5-42e6-b3fe-afcc64b3efdc", 00:07:24.141 "strip_size_kb": 64, 00:07:24.141 "state": "offline", 00:07:24.141 "raid_level": "concat", 00:07:24.141 "superblock": true, 00:07:24.141 "num_base_bdevs": 2, 00:07:24.141 "num_base_bdevs_discovered": 1, 00:07:24.141 "num_base_bdevs_operational": 1, 00:07:24.141 "base_bdevs_list": [ 00:07:24.141 { 00:07:24.141 "name": null, 00:07:24.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.141 "is_configured": false, 00:07:24.141 "data_offset": 0, 00:07:24.141 "data_size": 63488 00:07:24.141 }, 00:07:24.141 { 00:07:24.141 "name": "BaseBdev2", 00:07:24.141 "uuid": "e3f2008f-661e-4e05-864e-0f145d49e2aa", 00:07:24.141 "is_configured": true, 00:07:24.141 "data_offset": 2048, 00:07:24.141 "data_size": 63488 00:07:24.141 } 00:07:24.141 ] 
00:07:24.141 }' 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.141 04:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.708 [2024-11-21 04:53:41.204335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:24.708 [2024-11-21 04:53:41.204420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.708 04:53:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.708 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73426 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73426 ']' 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73426 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73426 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:24.709 killing process with pid 73426 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73426' 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73426 00:07:24.709 [2024-11-21 04:53:41.310911] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.709 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73426 00:07:24.709 [2024-11-21 04:53:41.312597] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.968 04:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:24.968 00:07:24.968 real 0m4.042s 00:07:24.968 user 0m6.228s 00:07:24.968 sys 0m0.870s 00:07:24.968 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.968 04:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.968 ************************************ 00:07:24.968 END TEST raid_state_function_test_sb 00:07:24.968 ************************************ 00:07:24.968 04:53:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:24.968 04:53:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:24.968 04:53:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.968 04:53:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.229 ************************************ 00:07:25.229 START TEST raid_superblock_test 00:07:25.229 ************************************ 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73666 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73666 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73666 ']' 00:07:25.229 04:53:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.229 04:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.229 [2024-11-21 04:53:41.800481] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:25.229 [2024-11-21 04:53:41.800681] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73666 ] 00:07:25.229 [2024-11-21 04:53:41.954850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.488 [2024-11-21 04:53:41.993043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.488 [2024-11-21 04:53:42.069517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.488 [2024-11-21 04:53:42.069567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.056 
04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:26.056 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.057 malloc1 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.057 [2024-11-21 04:53:42.655700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:26.057 [2024-11-21 04:53:42.655884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.057 [2024-11-21 04:53:42.655925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:26.057 [2024-11-21 04:53:42.655998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:26.057 [2024-11-21 04:53:42.658451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.057 [2024-11-21 04:53:42.658548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:26.057 pt1 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.057 malloc2 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.057 [2024-11-21 04:53:42.693960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:26.057 [2024-11-21 04:53:42.694098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.057 [2024-11-21 04:53:42.694119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:26.057 [2024-11-21 04:53:42.694131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.057 [2024-11-21 04:53:42.696623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.057 [2024-11-21 04:53:42.696659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:26.057 pt2 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.057 [2024-11-21 04:53:42.705995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:26.057 [2024-11-21 04:53:42.708034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:26.057 [2024-11-21 04:53:42.708209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:26.057 [2024-11-21 04:53:42.708242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:26.057 [2024-11-21 04:53:42.708504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:26.057 [2024-11-21 04:53:42.708676] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:26.057 [2024-11-21 04:53:42.708688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:26.057 [2024-11-21 04:53:42.708815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.057 04:53:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.057 "name": "raid_bdev1", 00:07:26.057 "uuid": "68a3611c-0ab8-4529-83c8-f8e8f9f8e135", 00:07:26.057 "strip_size_kb": 64, 00:07:26.057 "state": "online", 00:07:26.057 "raid_level": "concat", 00:07:26.057 "superblock": true, 00:07:26.057 "num_base_bdevs": 2, 00:07:26.057 "num_base_bdevs_discovered": 2, 00:07:26.057 "num_base_bdevs_operational": 2, 00:07:26.057 "base_bdevs_list": [ 00:07:26.057 { 00:07:26.057 "name": "pt1", 00:07:26.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.057 "is_configured": true, 00:07:26.057 "data_offset": 2048, 00:07:26.057 "data_size": 63488 00:07:26.057 }, 00:07:26.057 { 00:07:26.057 "name": "pt2", 00:07:26.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.057 "is_configured": true, 00:07:26.057 "data_offset": 2048, 00:07:26.057 "data_size": 63488 00:07:26.057 } 00:07:26.057 ] 00:07:26.057 }' 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.057 04:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.626 [2024-11-21 04:53:43.185392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.626 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.626 "name": "raid_bdev1", 00:07:26.626 "aliases": [ 00:07:26.626 "68a3611c-0ab8-4529-83c8-f8e8f9f8e135" 00:07:26.626 ], 00:07:26.626 "product_name": "Raid Volume", 00:07:26.626 "block_size": 512, 00:07:26.626 "num_blocks": 126976, 00:07:26.626 "uuid": "68a3611c-0ab8-4529-83c8-f8e8f9f8e135", 00:07:26.626 "assigned_rate_limits": { 00:07:26.626 "rw_ios_per_sec": 0, 00:07:26.626 "rw_mbytes_per_sec": 0, 00:07:26.626 "r_mbytes_per_sec": 0, 00:07:26.626 "w_mbytes_per_sec": 0 00:07:26.626 }, 00:07:26.626 "claimed": false, 00:07:26.626 "zoned": false, 00:07:26.626 "supported_io_types": { 00:07:26.626 "read": true, 00:07:26.626 "write": true, 00:07:26.626 "unmap": true, 00:07:26.626 "flush": true, 00:07:26.626 "reset": true, 00:07:26.626 "nvme_admin": false, 00:07:26.626 "nvme_io": false, 00:07:26.626 "nvme_io_md": false, 00:07:26.626 "write_zeroes": true, 00:07:26.626 "zcopy": false, 00:07:26.626 "get_zone_info": false, 00:07:26.626 "zone_management": false, 00:07:26.626 "zone_append": false, 00:07:26.626 "compare": false, 00:07:26.626 "compare_and_write": false, 00:07:26.626 "abort": false, 00:07:26.626 
"seek_hole": false, 00:07:26.626 "seek_data": false, 00:07:26.626 "copy": false, 00:07:26.626 "nvme_iov_md": false 00:07:26.626 }, 00:07:26.626 "memory_domains": [ 00:07:26.626 { 00:07:26.626 "dma_device_id": "system", 00:07:26.626 "dma_device_type": 1 00:07:26.626 }, 00:07:26.626 { 00:07:26.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.626 "dma_device_type": 2 00:07:26.626 }, 00:07:26.626 { 00:07:26.626 "dma_device_id": "system", 00:07:26.626 "dma_device_type": 1 00:07:26.626 }, 00:07:26.626 { 00:07:26.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.626 "dma_device_type": 2 00:07:26.626 } 00:07:26.626 ], 00:07:26.626 "driver_specific": { 00:07:26.626 "raid": { 00:07:26.626 "uuid": "68a3611c-0ab8-4529-83c8-f8e8f9f8e135", 00:07:26.626 "strip_size_kb": 64, 00:07:26.626 "state": "online", 00:07:26.626 "raid_level": "concat", 00:07:26.626 "superblock": true, 00:07:26.626 "num_base_bdevs": 2, 00:07:26.626 "num_base_bdevs_discovered": 2, 00:07:26.626 "num_base_bdevs_operational": 2, 00:07:26.626 "base_bdevs_list": [ 00:07:26.626 { 00:07:26.626 "name": "pt1", 00:07:26.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.626 "is_configured": true, 00:07:26.626 "data_offset": 2048, 00:07:26.627 "data_size": 63488 00:07:26.627 }, 00:07:26.627 { 00:07:26.627 "name": "pt2", 00:07:26.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.627 "is_configured": true, 00:07:26.627 "data_offset": 2048, 00:07:26.627 "data_size": 63488 00:07:26.627 } 00:07:26.627 ] 00:07:26.627 } 00:07:26.627 } 00:07:26.627 }' 00:07:26.627 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.627 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:26.627 pt2' 00:07:26.627 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:26.627 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.627 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.627 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:26.627 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.627 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.627 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.627 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq 
-r '.[] | .uuid' 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.888 [2024-11-21 04:53:43.428926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=68a3611c-0ab8-4529-83c8-f8e8f9f8e135 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 68a3611c-0ab8-4529-83c8-f8e8f9f8e135 ']' 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.888 [2024-11-21 04:53:43.472611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:26.888 [2024-11-21 04:53:43.472686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.888 [2024-11-21 04:53:43.472808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.888 [2024-11-21 04:53:43.472888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.888 [2024-11-21 04:53:43.472969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:26.888 04:53:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.888 [2024-11-21 04:53:43.592442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:26.888 [2024-11-21 04:53:43.594588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:26.888 [2024-11-21 04:53:43.594657] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:26.888 [2024-11-21 04:53:43.594709] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:26.888 [2024-11-21 04:53:43.594724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:26.888 [2024-11-21 04:53:43.594733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:26.888 request: 00:07:26.888 { 00:07:26.888 "name": "raid_bdev1", 00:07:26.888 "raid_level": "concat", 00:07:26.888 "base_bdevs": [ 00:07:26.888 "malloc1", 00:07:26.888 "malloc2" 00:07:26.888 ], 00:07:26.888 "strip_size_kb": 64, 00:07:26.888 "superblock": false, 00:07:26.888 "method": "bdev_raid_create", 00:07:26.888 "req_id": 1 00:07:26.888 } 00:07:26.888 Got JSON-RPC error response 00:07:26.888 response: 00:07:26.888 { 00:07:26.888 "code": -17, 00:07:26.888 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:26.888 } 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:26.888 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.149 [2024-11-21 04:53:43.648288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:27.149 [2024-11-21 04:53:43.648385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.149 [2024-11-21 04:53:43.648420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:27.149 [2024-11-21 04:53:43.648479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.149 [2024-11-21 04:53:43.650989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.149 [2024-11-21 04:53:43.651058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:27.149 [2024-11-21 04:53:43.651158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:27.149 [2024-11-21 04:53:43.651225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:27.149 pt1 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.149 "name": "raid_bdev1", 00:07:27.149 "uuid": "68a3611c-0ab8-4529-83c8-f8e8f9f8e135", 00:07:27.149 "strip_size_kb": 64, 00:07:27.149 "state": "configuring", 00:07:27.149 "raid_level": "concat", 00:07:27.149 "superblock": true, 00:07:27.149 "num_base_bdevs": 2, 00:07:27.149 "num_base_bdevs_discovered": 1, 00:07:27.149 "num_base_bdevs_operational": 2, 00:07:27.149 "base_bdevs_list": [ 00:07:27.149 { 00:07:27.149 
"name": "pt1", 00:07:27.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.149 "is_configured": true, 00:07:27.149 "data_offset": 2048, 00:07:27.149 "data_size": 63488 00:07:27.149 }, 00:07:27.149 { 00:07:27.149 "name": null, 00:07:27.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.149 "is_configured": false, 00:07:27.149 "data_offset": 2048, 00:07:27.149 "data_size": 63488 00:07:27.149 } 00:07:27.149 ] 00:07:27.149 }' 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.149 04:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.410 [2024-11-21 04:53:44.091552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:27.410 [2024-11-21 04:53:44.091603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.410 [2024-11-21 04:53:44.091622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:27.410 [2024-11-21 04:53:44.091631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.410 [2024-11-21 04:53:44.091998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.410 [2024-11-21 04:53:44.092013] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:27.410 [2024-11-21 04:53:44.092072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:27.410 [2024-11-21 04:53:44.092114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:27.410 [2024-11-21 04:53:44.092213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:27.410 [2024-11-21 04:53:44.092221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:27.410 [2024-11-21 04:53:44.092472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:27.410 [2024-11-21 04:53:44.092614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:27.410 [2024-11-21 04:53:44.092632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:27.410 [2024-11-21 04:53:44.092725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.410 pt2 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.410 
04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.410 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.670 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.670 "name": "raid_bdev1", 00:07:27.670 "uuid": "68a3611c-0ab8-4529-83c8-f8e8f9f8e135", 00:07:27.670 "strip_size_kb": 64, 00:07:27.670 "state": "online", 00:07:27.670 "raid_level": "concat", 00:07:27.670 "superblock": true, 00:07:27.670 "num_base_bdevs": 2, 00:07:27.670 "num_base_bdevs_discovered": 2, 00:07:27.670 "num_base_bdevs_operational": 2, 00:07:27.670 "base_bdevs_list": [ 00:07:27.670 { 00:07:27.670 "name": "pt1", 00:07:27.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.670 "is_configured": true, 00:07:27.670 "data_offset": 2048, 00:07:27.670 "data_size": 63488 00:07:27.670 }, 00:07:27.670 { 00:07:27.670 "name": "pt2", 00:07:27.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.670 "is_configured": true, 00:07:27.670 "data_offset": 2048, 00:07:27.670 "data_size": 63488 
00:07:27.670 } 00:07:27.670 ] 00:07:27.670 }' 00:07:27.670 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.670 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.931 [2024-11-21 04:53:44.575030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:27.931 "name": "raid_bdev1", 00:07:27.931 "aliases": [ 00:07:27.931 "68a3611c-0ab8-4529-83c8-f8e8f9f8e135" 00:07:27.931 ], 00:07:27.931 "product_name": "Raid Volume", 00:07:27.931 "block_size": 512, 00:07:27.931 "num_blocks": 126976, 00:07:27.931 "uuid": "68a3611c-0ab8-4529-83c8-f8e8f9f8e135", 00:07:27.931 "assigned_rate_limits": { 00:07:27.931 
"rw_ios_per_sec": 0, 00:07:27.931 "rw_mbytes_per_sec": 0, 00:07:27.931 "r_mbytes_per_sec": 0, 00:07:27.931 "w_mbytes_per_sec": 0 00:07:27.931 }, 00:07:27.931 "claimed": false, 00:07:27.931 "zoned": false, 00:07:27.931 "supported_io_types": { 00:07:27.931 "read": true, 00:07:27.931 "write": true, 00:07:27.931 "unmap": true, 00:07:27.931 "flush": true, 00:07:27.931 "reset": true, 00:07:27.931 "nvme_admin": false, 00:07:27.931 "nvme_io": false, 00:07:27.931 "nvme_io_md": false, 00:07:27.931 "write_zeroes": true, 00:07:27.931 "zcopy": false, 00:07:27.931 "get_zone_info": false, 00:07:27.931 "zone_management": false, 00:07:27.931 "zone_append": false, 00:07:27.931 "compare": false, 00:07:27.931 "compare_and_write": false, 00:07:27.931 "abort": false, 00:07:27.931 "seek_hole": false, 00:07:27.931 "seek_data": false, 00:07:27.931 "copy": false, 00:07:27.931 "nvme_iov_md": false 00:07:27.931 }, 00:07:27.931 "memory_domains": [ 00:07:27.931 { 00:07:27.931 "dma_device_id": "system", 00:07:27.931 "dma_device_type": 1 00:07:27.931 }, 00:07:27.931 { 00:07:27.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.931 "dma_device_type": 2 00:07:27.931 }, 00:07:27.931 { 00:07:27.931 "dma_device_id": "system", 00:07:27.931 "dma_device_type": 1 00:07:27.931 }, 00:07:27.931 { 00:07:27.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.931 "dma_device_type": 2 00:07:27.931 } 00:07:27.931 ], 00:07:27.931 "driver_specific": { 00:07:27.931 "raid": { 00:07:27.931 "uuid": "68a3611c-0ab8-4529-83c8-f8e8f9f8e135", 00:07:27.931 "strip_size_kb": 64, 00:07:27.931 "state": "online", 00:07:27.931 "raid_level": "concat", 00:07:27.931 "superblock": true, 00:07:27.931 "num_base_bdevs": 2, 00:07:27.931 "num_base_bdevs_discovered": 2, 00:07:27.931 "num_base_bdevs_operational": 2, 00:07:27.931 "base_bdevs_list": [ 00:07:27.931 { 00:07:27.931 "name": "pt1", 00:07:27.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.931 "is_configured": true, 00:07:27.931 "data_offset": 2048, 00:07:27.931 
"data_size": 63488 00:07:27.931 }, 00:07:27.931 { 00:07:27.931 "name": "pt2", 00:07:27.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.931 "is_configured": true, 00:07:27.931 "data_offset": 2048, 00:07:27.931 "data_size": 63488 00:07:27.931 } 00:07:27.931 ] 00:07:27.931 } 00:07:27.931 } 00:07:27.931 }' 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.931 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:27.931 pt2' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.192 [2024-11-21 04:53:44.802570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 68a3611c-0ab8-4529-83c8-f8e8f9f8e135 '!=' 68a3611c-0ab8-4529-83c8-f8e8f9f8e135 ']' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73666 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73666 ']' 
00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73666 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73666 00:07:28.192 killing process with pid 73666 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73666' 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73666 00:07:28.192 [2024-11-21 04:53:44.892967] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.192 [2024-11-21 04:53:44.893045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.192 [2024-11-21 04:53:44.893114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.192 [2024-11-21 04:53:44.893124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:28.192 04:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73666 00:07:28.452 [2024-11-21 04:53:44.934893] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.712 04:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:28.712 00:07:28.712 real 0m3.545s 00:07:28.712 user 0m5.334s 00:07:28.712 sys 0m0.818s 00:07:28.712 04:53:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.712 04:53:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.712 ************************************ 00:07:28.712 END TEST raid_superblock_test 00:07:28.712 ************************************ 00:07:28.712 04:53:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:28.712 04:53:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:28.712 04:53:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.712 04:53:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.712 ************************************ 00:07:28.712 START TEST raid_read_error_test 00:07:28.712 ************************************ 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.712 
04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:28.712 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MZFaown9g7 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73862 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:28.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73862 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73862 ']' 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.713 04:53:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.713 [2024-11-21 04:53:45.438079] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:28.713 [2024-11-21 04:53:45.438205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73862 ] 00:07:28.974 [2024-11-21 04:53:45.606654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.974 [2024-11-21 04:53:45.645688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.234 [2024-11-21 04:53:45.724644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.234 [2024-11-21 04:53:45.724702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for 
bdev in "${base_bdevs[@]}" 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.806 BaseBdev1_malloc 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.806 true 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.806 [2024-11-21 04:53:46.328618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:29.806 [2024-11-21 04:53:46.328696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.806 [2024-11-21 04:53:46.328732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:29.806 [2024-11-21 04:53:46.328744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.806 [2024-11-21 04:53:46.331473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.806 [2024-11-21 04:53:46.331511] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:29.806 BaseBdev1 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.806 BaseBdev2_malloc 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.806 true 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.806 [2024-11-21 04:53:46.375735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:29.806 [2024-11-21 04:53:46.375812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.806 [2024-11-21 04:53:46.375837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:29.806 [2024-11-21 
04:53:46.375846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.806 [2024-11-21 04:53:46.378679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.806 [2024-11-21 04:53:46.378779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:29.806 BaseBdev2 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.806 [2024-11-21 04:53:46.387789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.806 [2024-11-21 04:53:46.390095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.806 [2024-11-21 04:53:46.390310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:29.806 [2024-11-21 04:53:46.390323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.806 [2024-11-21 04:53:46.390627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:29.806 [2024-11-21 04:53:46.390797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:29.806 [2024-11-21 04:53:46.390816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:29.806 [2024-11-21 04:53:46.390983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.806 04:53:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.806 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.807 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.807 "name": "raid_bdev1", 00:07:29.807 "uuid": "642636da-3bbf-431b-8bdb-be9dd47a1009", 00:07:29.807 "strip_size_kb": 64, 00:07:29.807 "state": "online", 00:07:29.807 "raid_level": "concat", 00:07:29.807 "superblock": true, 00:07:29.807 "num_base_bdevs": 2, 
00:07:29.807 "num_base_bdevs_discovered": 2, 00:07:29.807 "num_base_bdevs_operational": 2, 00:07:29.807 "base_bdevs_list": [ 00:07:29.807 { 00:07:29.807 "name": "BaseBdev1", 00:07:29.807 "uuid": "8d939633-3928-57da-9371-e838d6b97a86", 00:07:29.807 "is_configured": true, 00:07:29.807 "data_offset": 2048, 00:07:29.807 "data_size": 63488 00:07:29.807 }, 00:07:29.807 { 00:07:29.807 "name": "BaseBdev2", 00:07:29.807 "uuid": "a31b8d0c-8657-587d-a160-f5e832e524d6", 00:07:29.807 "is_configured": true, 00:07:29.807 "data_offset": 2048, 00:07:29.807 "data_size": 63488 00:07:29.807 } 00:07:29.807 ] 00:07:29.807 }' 00:07:29.807 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.807 04:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.377 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:30.377 04:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:30.377 [2024-11-21 04:53:46.915523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:31.316 04:53:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.316 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.317 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.317 04:53:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.317 04:53:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.317 04:53:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.317 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.317 "name": "raid_bdev1", 00:07:31.317 "uuid": "642636da-3bbf-431b-8bdb-be9dd47a1009", 00:07:31.317 "strip_size_kb": 64, 00:07:31.317 "state": "online", 00:07:31.317 "raid_level": "concat", 00:07:31.317 "superblock": true, 00:07:31.317 "num_base_bdevs": 2, 
00:07:31.317 "num_base_bdevs_discovered": 2, 00:07:31.317 "num_base_bdevs_operational": 2, 00:07:31.317 "base_bdevs_list": [ 00:07:31.317 { 00:07:31.317 "name": "BaseBdev1", 00:07:31.317 "uuid": "8d939633-3928-57da-9371-e838d6b97a86", 00:07:31.317 "is_configured": true, 00:07:31.317 "data_offset": 2048, 00:07:31.317 "data_size": 63488 00:07:31.317 }, 00:07:31.317 { 00:07:31.317 "name": "BaseBdev2", 00:07:31.317 "uuid": "a31b8d0c-8657-587d-a160-f5e832e524d6", 00:07:31.317 "is_configured": true, 00:07:31.317 "data_offset": 2048, 00:07:31.317 "data_size": 63488 00:07:31.317 } 00:07:31.317 ] 00:07:31.317 }' 00:07:31.317 04:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.317 04:53:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.577 04:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.577 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.577 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.577 [2024-11-21 04:53:48.292462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.577 [2024-11-21 04:53:48.292617] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.577 [2024-11-21 04:53:48.295193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.577 [2024-11-21 04:53:48.295281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.577 [2024-11-21 04:53:48.295349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.577 [2024-11-21 04:53:48.295391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:31.577 { 00:07:31.577 "results": [ 00:07:31.577 { 00:07:31.577 "job": 
"raid_bdev1", 00:07:31.577 "core_mask": "0x1", 00:07:31.577 "workload": "randrw", 00:07:31.577 "percentage": 50, 00:07:31.577 "status": "finished", 00:07:31.577 "queue_depth": 1, 00:07:31.577 "io_size": 131072, 00:07:31.577 "runtime": 1.377489, 00:07:31.577 "iops": 14698.483980634328, 00:07:31.577 "mibps": 1837.310497579291, 00:07:31.577 "io_failed": 1, 00:07:31.577 "io_timeout": 0, 00:07:31.577 "avg_latency_us": 95.42937651721276, 00:07:31.577 "min_latency_us": 25.7117903930131, 00:07:31.577 "max_latency_us": 1488.1537117903931 00:07:31.577 } 00:07:31.577 ], 00:07:31.577 "core_count": 1 00:07:31.577 } 00:07:31.577 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.577 04:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73862 00:07:31.577 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73862 ']' 00:07:31.577 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73862 00:07:31.577 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:31.577 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.577 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73862 00:07:31.837 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.837 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.837 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73862' 00:07:31.837 killing process with pid 73862 00:07:31.837 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73862 00:07:31.837 [2024-11-21 04:53:48.344435] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.837 04:53:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73862 00:07:31.837 [2024-11-21 04:53:48.373699] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.097 04:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:32.097 04:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MZFaown9g7 00:07:32.097 04:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:32.097 04:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:32.097 04:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:32.097 04:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.097 04:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:32.097 04:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:32.097 00:07:32.097 real 0m3.379s 00:07:32.097 user 0m4.168s 00:07:32.097 sys 0m0.630s 00:07:32.097 ************************************ 00:07:32.097 END TEST raid_read_error_test 00:07:32.097 ************************************ 00:07:32.097 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.097 04:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.097 04:53:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:32.097 04:53:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:32.097 04:53:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.097 04:53:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.097 ************************************ 00:07:32.097 START TEST raid_write_error_test 00:07:32.097 ************************************ 00:07:32.097 04:53:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:32.097 
04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5OkNRZW0P7 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73997 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73997 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73997 ']' 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.097 04:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.357 [2024-11-21 04:53:48.887788] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:07:32.357 [2024-11-21 04:53:48.887929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73997 ] 00:07:32.357 [2024-11-21 04:53:49.035982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.357 [2024-11-21 04:53:49.081421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.617 [2024-11-21 04:53:49.160845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.617 [2024-11-21 04:53:49.160885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.188 BaseBdev1_malloc 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.188 true 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.188 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.189 [2024-11-21 04:53:49.785493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:33.189 [2024-11-21 04:53:49.785567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.189 [2024-11-21 04:53:49.785590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:33.189 [2024-11-21 04:53:49.785599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.189 [2024-11-21 04:53:49.788173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.189 [2024-11-21 04:53:49.788208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:33.189 BaseBdev1 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.189 BaseBdev2_malloc 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:33.189 04:53:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.189 true 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.189 [2024-11-21 04:53:49.832387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:33.189 [2024-11-21 04:53:49.832441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.189 [2024-11-21 04:53:49.832460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:33.189 [2024-11-21 04:53:49.832470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.189 [2024-11-21 04:53:49.834837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.189 [2024-11-21 04:53:49.834876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:33.189 BaseBdev2 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.189 [2024-11-21 04:53:49.844429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:33.189 [2024-11-21 04:53:49.846660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.189 [2024-11-21 04:53:49.846834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:33.189 [2024-11-21 04:53:49.846847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.189 [2024-11-21 04:53:49.847146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:33.189 [2024-11-21 04:53:49.847313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:33.189 [2024-11-21 04:53:49.847328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:33.189 [2024-11-21 04:53:49.847452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.189 04:53:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.189 "name": "raid_bdev1", 00:07:33.189 "uuid": "a53f2e72-c3f4-44b1-9a86-a1316fbf1adc", 00:07:33.189 "strip_size_kb": 64, 00:07:33.189 "state": "online", 00:07:33.189 "raid_level": "concat", 00:07:33.189 "superblock": true, 00:07:33.189 "num_base_bdevs": 2, 00:07:33.189 "num_base_bdevs_discovered": 2, 00:07:33.189 "num_base_bdevs_operational": 2, 00:07:33.189 "base_bdevs_list": [ 00:07:33.189 { 00:07:33.189 "name": "BaseBdev1", 00:07:33.189 "uuid": "760668a4-258d-585a-ab08-52344297acc9", 00:07:33.189 "is_configured": true, 00:07:33.189 "data_offset": 2048, 00:07:33.189 "data_size": 63488 00:07:33.189 }, 00:07:33.189 { 00:07:33.189 "name": "BaseBdev2", 00:07:33.189 "uuid": "0b71ec28-319c-50d5-aa0d-434e4e0f4f0f", 00:07:33.189 "is_configured": true, 00:07:33.189 "data_offset": 2048, 00:07:33.189 "data_size": 63488 00:07:33.189 } 00:07:33.189 ] 00:07:33.189 }' 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.189 04:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.759 04:53:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:33.759 04:53:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:33.759 [2024-11-21 04:53:50.399852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.699 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.700 "name": "raid_bdev1", 00:07:34.700 "uuid": "a53f2e72-c3f4-44b1-9a86-a1316fbf1adc", 00:07:34.700 "strip_size_kb": 64, 00:07:34.700 "state": "online", 00:07:34.700 "raid_level": "concat", 00:07:34.700 "superblock": true, 00:07:34.700 "num_base_bdevs": 2, 00:07:34.700 "num_base_bdevs_discovered": 2, 00:07:34.700 "num_base_bdevs_operational": 2, 00:07:34.700 "base_bdevs_list": [ 00:07:34.700 { 00:07:34.700 "name": "BaseBdev1", 00:07:34.700 "uuid": "760668a4-258d-585a-ab08-52344297acc9", 00:07:34.700 "is_configured": true, 00:07:34.700 "data_offset": 2048, 00:07:34.700 "data_size": 63488 00:07:34.700 }, 00:07:34.700 { 00:07:34.700 "name": "BaseBdev2", 00:07:34.700 "uuid": "0b71ec28-319c-50d5-aa0d-434e4e0f4f0f", 00:07:34.700 "is_configured": true, 00:07:34.700 "data_offset": 2048, 00:07:34.700 "data_size": 63488 00:07:34.700 } 00:07:34.700 ] 00:07:34.700 }' 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.700 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.270 [2024-11-21 04:53:51.740498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:35.270 [2024-11-21 04:53:51.740559] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.270 [2024-11-21 04:53:51.742985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.270 [2024-11-21 04:53:51.743024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.270 [2024-11-21 04:53:51.743062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.270 [2024-11-21 04:53:51.743083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:35.270 { 00:07:35.270 "results": [ 00:07:35.270 { 00:07:35.270 "job": "raid_bdev1", 00:07:35.270 "core_mask": "0x1", 00:07:35.270 "workload": "randrw", 00:07:35.270 "percentage": 50, 00:07:35.270 "status": "finished", 00:07:35.270 "queue_depth": 1, 00:07:35.270 "io_size": 131072, 00:07:35.270 "runtime": 1.341108, 00:07:35.270 "iops": 15184.459417138665, 00:07:35.270 "mibps": 1898.0574271423332, 00:07:35.270 "io_failed": 1, 00:07:35.270 "io_timeout": 0, 00:07:35.270 "avg_latency_us": 92.21990790346912, 00:07:35.270 "min_latency_us": 25.823580786026202, 00:07:35.270 "max_latency_us": 1352.216593886463 00:07:35.270 } 00:07:35.270 ], 00:07:35.270 "core_count": 1 00:07:35.270 } 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73997 00:07:35.270 04:53:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73997 ']' 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73997 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73997 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73997' 00:07:35.270 killing process with pid 73997 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73997 00:07:35.270 [2024-11-21 04:53:51.776865] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.270 04:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73997 00:07:35.270 [2024-11-21 04:53:51.805880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.531 04:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5OkNRZW0P7 00:07:35.531 04:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:35.531 04:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:35.531 04:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:35.531 04:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:35.531 04:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:35.531 04:53:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:35.531 04:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:35.531 00:07:35.531 real 0m3.359s 00:07:35.531 user 0m4.161s 00:07:35.531 sys 0m0.596s 00:07:35.531 04:53:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.531 ************************************ 00:07:35.531 END TEST raid_write_error_test 00:07:35.531 ************************************ 00:07:35.531 04:53:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.531 04:53:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:35.531 04:53:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:35.531 04:53:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:35.531 04:53:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.531 04:53:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.531 ************************************ 00:07:35.531 START TEST raid_state_function_test 00:07:35.531 ************************************ 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:35.531 Process raid pid: 74124 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74124 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74124' 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74124 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74124 ']' 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.531 04:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.794 [2024-11-21 04:53:52.315655] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:07:35.794 [2024-11-21 04:53:52.315889] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.794 [2024-11-21 04:53:52.489377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.054 [2024-11-21 04:53:52.533864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.054 [2024-11-21 04:53:52.613150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.054 [2024-11-21 04:53:52.613301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.624 [2024-11-21 04:53:53.158109] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.624 [2024-11-21 04:53:53.158296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.624 [2024-11-21 04:53:53.158329] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.624 [2024-11-21 04:53:53.158355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.624 04:53:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.624 "name": "Existed_Raid", 00:07:36.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.624 "strip_size_kb": 0, 00:07:36.624 "state": "configuring", 00:07:36.624 
"raid_level": "raid1", 00:07:36.624 "superblock": false, 00:07:36.624 "num_base_bdevs": 2, 00:07:36.624 "num_base_bdevs_discovered": 0, 00:07:36.624 "num_base_bdevs_operational": 2, 00:07:36.624 "base_bdevs_list": [ 00:07:36.624 { 00:07:36.624 "name": "BaseBdev1", 00:07:36.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.624 "is_configured": false, 00:07:36.624 "data_offset": 0, 00:07:36.624 "data_size": 0 00:07:36.624 }, 00:07:36.624 { 00:07:36.624 "name": "BaseBdev2", 00:07:36.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.624 "is_configured": false, 00:07:36.624 "data_offset": 0, 00:07:36.624 "data_size": 0 00:07:36.624 } 00:07:36.624 ] 00:07:36.624 }' 00:07:36.624 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.625 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.885 [2024-11-21 04:53:53.597298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.885 [2024-11-21 04:53:53.597364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:36.885 [2024-11-21 04:53:53.609250] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.885 [2024-11-21 04:53:53.609307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.885 [2024-11-21 04:53:53.609316] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.885 [2024-11-21 04:53:53.609327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.885 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.146 [2024-11-21 04:53:53.636627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.146 BaseBdev1 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.146 [ 00:07:37.146 { 00:07:37.146 "name": "BaseBdev1", 00:07:37.146 "aliases": [ 00:07:37.146 "4b5462de-a048-4289-840c-78a30e3e6ea2" 00:07:37.146 ], 00:07:37.146 "product_name": "Malloc disk", 00:07:37.146 "block_size": 512, 00:07:37.146 "num_blocks": 65536, 00:07:37.146 "uuid": "4b5462de-a048-4289-840c-78a30e3e6ea2", 00:07:37.146 "assigned_rate_limits": { 00:07:37.146 "rw_ios_per_sec": 0, 00:07:37.146 "rw_mbytes_per_sec": 0, 00:07:37.146 "r_mbytes_per_sec": 0, 00:07:37.146 "w_mbytes_per_sec": 0 00:07:37.146 }, 00:07:37.146 "claimed": true, 00:07:37.146 "claim_type": "exclusive_write", 00:07:37.146 "zoned": false, 00:07:37.146 "supported_io_types": { 00:07:37.146 "read": true, 00:07:37.146 "write": true, 00:07:37.146 "unmap": true, 00:07:37.146 "flush": true, 00:07:37.146 "reset": true, 00:07:37.146 "nvme_admin": false, 00:07:37.146 "nvme_io": false, 00:07:37.146 "nvme_io_md": false, 00:07:37.146 "write_zeroes": true, 00:07:37.146 "zcopy": true, 00:07:37.146 "get_zone_info": false, 00:07:37.146 "zone_management": false, 00:07:37.146 "zone_append": false, 00:07:37.146 "compare": false, 00:07:37.146 "compare_and_write": false, 00:07:37.146 "abort": true, 00:07:37.146 "seek_hole": false, 00:07:37.146 "seek_data": false, 00:07:37.146 "copy": true, 00:07:37.146 "nvme_iov_md": 
false 00:07:37.146 }, 00:07:37.146 "memory_domains": [ 00:07:37.146 { 00:07:37.146 "dma_device_id": "system", 00:07:37.146 "dma_device_type": 1 00:07:37.146 }, 00:07:37.146 { 00:07:37.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.146 "dma_device_type": 2 00:07:37.146 } 00:07:37.146 ], 00:07:37.146 "driver_specific": {} 00:07:37.146 } 00:07:37.146 ] 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.146 04:53:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.146 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.146 "name": "Existed_Raid", 00:07:37.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.146 "strip_size_kb": 0, 00:07:37.146 "state": "configuring", 00:07:37.146 "raid_level": "raid1", 00:07:37.146 "superblock": false, 00:07:37.146 "num_base_bdevs": 2, 00:07:37.146 "num_base_bdevs_discovered": 1, 00:07:37.146 "num_base_bdevs_operational": 2, 00:07:37.146 "base_bdevs_list": [ 00:07:37.146 { 00:07:37.146 "name": "BaseBdev1", 00:07:37.146 "uuid": "4b5462de-a048-4289-840c-78a30e3e6ea2", 00:07:37.146 "is_configured": true, 00:07:37.146 "data_offset": 0, 00:07:37.146 "data_size": 65536 00:07:37.146 }, 00:07:37.146 { 00:07:37.146 "name": "BaseBdev2", 00:07:37.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.147 "is_configured": false, 00:07:37.147 "data_offset": 0, 00:07:37.147 "data_size": 0 00:07:37.147 } 00:07:37.147 ] 00:07:37.147 }' 00:07:37.147 04:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.147 04:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.408 [2024-11-21 04:53:54.119874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.408 [2024-11-21 04:53:54.119959] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.408 [2024-11-21 04:53:54.131843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.408 [2024-11-21 04:53:54.134351] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.408 [2024-11-21 04:53:54.134402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.408 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.668 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.668 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.668 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.668 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.668 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.668 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.668 "name": "Existed_Raid", 00:07:37.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.668 "strip_size_kb": 0, 00:07:37.669 "state": "configuring", 00:07:37.669 "raid_level": "raid1", 00:07:37.669 "superblock": false, 00:07:37.669 "num_base_bdevs": 2, 00:07:37.669 "num_base_bdevs_discovered": 1, 00:07:37.669 "num_base_bdevs_operational": 2, 00:07:37.669 "base_bdevs_list": [ 00:07:37.669 { 00:07:37.669 "name": "BaseBdev1", 00:07:37.669 "uuid": "4b5462de-a048-4289-840c-78a30e3e6ea2", 00:07:37.669 "is_configured": true, 00:07:37.669 "data_offset": 0, 00:07:37.669 "data_size": 65536 00:07:37.669 }, 00:07:37.669 { 00:07:37.669 "name": "BaseBdev2", 00:07:37.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.669 "is_configured": false, 00:07:37.669 "data_offset": 0, 00:07:37.669 "data_size": 0 00:07:37.669 } 00:07:37.669 
] 00:07:37.669 }' 00:07:37.669 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.669 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.929 [2024-11-21 04:53:54.532005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.929 [2024-11-21 04:53:54.532166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:37.929 [2024-11-21 04:53:54.532195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:37.929 [2024-11-21 04:53:54.532569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:37.929 [2024-11-21 04:53:54.532800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:37.929 [2024-11-21 04:53:54.532850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:37.929 [2024-11-21 04:53:54.533166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.929 BaseBdev2 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.929 04:53:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.929 [ 00:07:37.929 { 00:07:37.929 "name": "BaseBdev2", 00:07:37.929 "aliases": [ 00:07:37.929 "d8af6a0e-f0a8-47e6-b582-7d4a6f1c7905" 00:07:37.929 ], 00:07:37.929 "product_name": "Malloc disk", 00:07:37.929 "block_size": 512, 00:07:37.929 "num_blocks": 65536, 00:07:37.929 "uuid": "d8af6a0e-f0a8-47e6-b582-7d4a6f1c7905", 00:07:37.929 "assigned_rate_limits": { 00:07:37.929 "rw_ios_per_sec": 0, 00:07:37.929 "rw_mbytes_per_sec": 0, 00:07:37.929 "r_mbytes_per_sec": 0, 00:07:37.929 "w_mbytes_per_sec": 0 00:07:37.929 }, 00:07:37.929 "claimed": true, 00:07:37.929 "claim_type": "exclusive_write", 00:07:37.929 "zoned": false, 00:07:37.929 "supported_io_types": { 00:07:37.929 "read": true, 00:07:37.929 "write": true, 00:07:37.929 "unmap": true, 00:07:37.929 "flush": true, 00:07:37.929 "reset": true, 00:07:37.929 "nvme_admin": false, 00:07:37.929 "nvme_io": false, 00:07:37.929 "nvme_io_md": 
false, 00:07:37.929 "write_zeroes": true, 00:07:37.929 "zcopy": true, 00:07:37.929 "get_zone_info": false, 00:07:37.929 "zone_management": false, 00:07:37.929 "zone_append": false, 00:07:37.929 "compare": false, 00:07:37.929 "compare_and_write": false, 00:07:37.929 "abort": true, 00:07:37.929 "seek_hole": false, 00:07:37.929 "seek_data": false, 00:07:37.929 "copy": true, 00:07:37.929 "nvme_iov_md": false 00:07:37.929 }, 00:07:37.929 "memory_domains": [ 00:07:37.929 { 00:07:37.929 "dma_device_id": "system", 00:07:37.929 "dma_device_type": 1 00:07:37.929 }, 00:07:37.929 { 00:07:37.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.929 "dma_device_type": 2 00:07:37.929 } 00:07:37.929 ], 00:07:37.929 "driver_specific": {} 00:07:37.929 } 00:07:37.929 ] 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:37.929 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.930 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.930 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.930 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.930 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.930 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.930 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.930 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.930 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.930 "name": "Existed_Raid", 00:07:37.930 "uuid": "4584a5c6-195a-42ae-8f1d-29655dd3a251", 00:07:37.930 "strip_size_kb": 0, 00:07:37.930 "state": "online", 00:07:37.930 "raid_level": "raid1", 00:07:37.930 "superblock": false, 00:07:37.930 "num_base_bdevs": 2, 00:07:37.930 "num_base_bdevs_discovered": 2, 00:07:37.930 "num_base_bdevs_operational": 2, 00:07:37.930 "base_bdevs_list": [ 00:07:37.930 { 00:07:37.930 "name": "BaseBdev1", 00:07:37.930 "uuid": "4b5462de-a048-4289-840c-78a30e3e6ea2", 00:07:37.930 "is_configured": true, 00:07:37.930 "data_offset": 0, 00:07:37.930 "data_size": 65536 00:07:37.930 }, 00:07:37.930 { 00:07:37.930 "name": "BaseBdev2", 00:07:37.930 "uuid": "d8af6a0e-f0a8-47e6-b582-7d4a6f1c7905", 00:07:37.930 "is_configured": true, 00:07:37.930 "data_offset": 0, 00:07:37.930 "data_size": 65536 00:07:37.930 } 00:07:37.930 ] 00:07:37.930 }' 00:07:37.930 04:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:37.930 04:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.500 [2024-11-21 04:53:55.035649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.500 "name": "Existed_Raid", 00:07:38.500 "aliases": [ 00:07:38.500 "4584a5c6-195a-42ae-8f1d-29655dd3a251" 00:07:38.500 ], 00:07:38.500 "product_name": "Raid Volume", 00:07:38.500 "block_size": 512, 00:07:38.500 "num_blocks": 65536, 00:07:38.500 "uuid": "4584a5c6-195a-42ae-8f1d-29655dd3a251", 00:07:38.500 "assigned_rate_limits": { 00:07:38.500 "rw_ios_per_sec": 0, 00:07:38.500 "rw_mbytes_per_sec": 0, 00:07:38.500 "r_mbytes_per_sec": 
0, 00:07:38.500 "w_mbytes_per_sec": 0 00:07:38.500 }, 00:07:38.500 "claimed": false, 00:07:38.500 "zoned": false, 00:07:38.500 "supported_io_types": { 00:07:38.500 "read": true, 00:07:38.500 "write": true, 00:07:38.500 "unmap": false, 00:07:38.500 "flush": false, 00:07:38.500 "reset": true, 00:07:38.500 "nvme_admin": false, 00:07:38.500 "nvme_io": false, 00:07:38.500 "nvme_io_md": false, 00:07:38.500 "write_zeroes": true, 00:07:38.500 "zcopy": false, 00:07:38.500 "get_zone_info": false, 00:07:38.500 "zone_management": false, 00:07:38.500 "zone_append": false, 00:07:38.500 "compare": false, 00:07:38.500 "compare_and_write": false, 00:07:38.500 "abort": false, 00:07:38.500 "seek_hole": false, 00:07:38.500 "seek_data": false, 00:07:38.500 "copy": false, 00:07:38.500 "nvme_iov_md": false 00:07:38.500 }, 00:07:38.500 "memory_domains": [ 00:07:38.500 { 00:07:38.500 "dma_device_id": "system", 00:07:38.500 "dma_device_type": 1 00:07:38.500 }, 00:07:38.500 { 00:07:38.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.500 "dma_device_type": 2 00:07:38.500 }, 00:07:38.500 { 00:07:38.500 "dma_device_id": "system", 00:07:38.500 "dma_device_type": 1 00:07:38.500 }, 00:07:38.500 { 00:07:38.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.500 "dma_device_type": 2 00:07:38.500 } 00:07:38.500 ], 00:07:38.500 "driver_specific": { 00:07:38.500 "raid": { 00:07:38.500 "uuid": "4584a5c6-195a-42ae-8f1d-29655dd3a251", 00:07:38.500 "strip_size_kb": 0, 00:07:38.500 "state": "online", 00:07:38.500 "raid_level": "raid1", 00:07:38.500 "superblock": false, 00:07:38.500 "num_base_bdevs": 2, 00:07:38.500 "num_base_bdevs_discovered": 2, 00:07:38.500 "num_base_bdevs_operational": 2, 00:07:38.500 "base_bdevs_list": [ 00:07:38.500 { 00:07:38.500 "name": "BaseBdev1", 00:07:38.500 "uuid": "4b5462de-a048-4289-840c-78a30e3e6ea2", 00:07:38.500 "is_configured": true, 00:07:38.500 "data_offset": 0, 00:07:38.500 "data_size": 65536 00:07:38.500 }, 00:07:38.500 { 00:07:38.500 "name": "BaseBdev2", 
00:07:38.500 "uuid": "d8af6a0e-f0a8-47e6-b582-7d4a6f1c7905", 00:07:38.500 "is_configured": true, 00:07:38.500 "data_offset": 0, 00:07:38.500 "data_size": 65536 00:07:38.500 } 00:07:38.500 ] 00:07:38.500 } 00:07:38.500 } 00:07:38.500 }' 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.500 BaseBdev2' 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.500 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.501 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:38.501 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.501 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.501 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.761 [2024-11-21 04:53:55.266858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.761 "name": "Existed_Raid", 00:07:38.761 "uuid": "4584a5c6-195a-42ae-8f1d-29655dd3a251", 00:07:38.761 "strip_size_kb": 0, 00:07:38.761 "state": "online", 00:07:38.761 "raid_level": "raid1", 00:07:38.761 "superblock": false, 00:07:38.761 "num_base_bdevs": 2, 00:07:38.761 "num_base_bdevs_discovered": 1, 00:07:38.761 "num_base_bdevs_operational": 1, 00:07:38.761 "base_bdevs_list": [ 00:07:38.761 
{ 00:07:38.761 "name": null, 00:07:38.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.761 "is_configured": false, 00:07:38.761 "data_offset": 0, 00:07:38.761 "data_size": 65536 00:07:38.761 }, 00:07:38.761 { 00:07:38.761 "name": "BaseBdev2", 00:07:38.761 "uuid": "d8af6a0e-f0a8-47e6-b582-7d4a6f1c7905", 00:07:38.761 "is_configured": true, 00:07:38.761 "data_offset": 0, 00:07:38.761 "data_size": 65536 00:07:38.761 } 00:07:38.761 ] 00:07:38.761 }' 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.761 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.021 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:39.021 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.021 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.021 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.021 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.021 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.022 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:39.282 [2024-11-21 04:53:55.779155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.282 [2024-11-21 04:53:55.779267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.282 [2024-11-21 04:53:55.800418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.282 [2024-11-21 04:53:55.800494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.282 [2024-11-21 04:53:55.800516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74124 00:07:39.282 04:53:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74124 ']' 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74124 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74124 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.282 killing process with pid 74124 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74124' 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74124 00:07:39.282 [2024-11-21 04:53:55.902255] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.282 04:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74124 00:07:39.282 [2024-11-21 04:53:55.903879] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.543 04:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:39.543 00:07:39.543 real 0m4.004s 00:07:39.543 user 0m6.177s 00:07:39.543 sys 0m0.861s 00:07:39.543 ************************************ 00:07:39.543 END TEST raid_state_function_test 00:07:39.543 04:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.543 04:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.543 ************************************ 00:07:39.803 04:53:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:39.803 04:53:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:39.803 04:53:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.803 04:53:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.803 ************************************ 00:07:39.803 START TEST raid_state_function_test_sb 00:07:39.803 ************************************ 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:39.803 Process raid pid: 74366 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74366 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74366' 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74366 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74366 ']' 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.803 04:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.803 [2024-11-21 04:53:56.385996] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:39.803 [2024-11-21 04:53:56.386152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.063 [2024-11-21 04:53:56.555251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.063 [2024-11-21 04:53:56.597941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.063 [2024-11-21 04:53:56.674948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.063 [2024-11-21 04:53:56.674994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
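The `verify_raid_bdev_state` helper seen throughout this run pulls a single raid bdev's record out of `rpc_cmd bdev_raid_get_bdevs all` with a jq select on the name. A self-contained sketch of that filtering, run against a trimmed literal stand-in for the RPC output (the JSON here is illustrative, not the live response):

```shell
# Stand-in for the output of `rpc_cmd bdev_raid_get_bdevs all` (trimmed).
bdevs='[{"name": "Existed_Raid", "state": "configuring"},
        {"name": "Some_Other_Raid", "state": "online"}]'

# Same filter as bdev_raid.sh@113: keep only the record named Existed_Raid.
echo "$bdevs" | jq -r '.[] | select(.name == "Existed_Raid") | .state'
# prints: configuring
```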
00:07:40.634 [2024-11-21 04:53:57.222781] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:40.634 [2024-11-21 04:53:57.222845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.634 [2024-11-21 04:53:57.222865] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.634 [2024-11-21 04:53:57.222876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.634 04:53:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.634 "name": "Existed_Raid", 00:07:40.634 "uuid": "e260de59-7220-4fd8-a60a-2299810ab9c5", 00:07:40.634 "strip_size_kb": 0, 00:07:40.634 "state": "configuring", 00:07:40.634 "raid_level": "raid1", 00:07:40.634 "superblock": true, 00:07:40.634 "num_base_bdevs": 2, 00:07:40.634 "num_base_bdevs_discovered": 0, 00:07:40.634 "num_base_bdevs_operational": 2, 00:07:40.634 "base_bdevs_list": [ 00:07:40.634 { 00:07:40.634 "name": "BaseBdev1", 00:07:40.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.634 "is_configured": false, 00:07:40.634 "data_offset": 0, 00:07:40.634 "data_size": 0 00:07:40.634 }, 00:07:40.634 { 00:07:40.634 "name": "BaseBdev2", 00:07:40.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.634 "is_configured": false, 00:07:40.634 "data_offset": 0, 00:07:40.634 "data_size": 0 00:07:40.634 } 00:07:40.634 ] 00:07:40.634 }' 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.634 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 [2024-11-21 
04:53:57.645927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.206 [2024-11-21 04:53:57.646046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 [2024-11-21 04:53:57.657934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.206 [2024-11-21 04:53:57.658016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.206 [2024-11-21 04:53:57.658045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.206 [2024-11-21 04:53:57.658069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 [2024-11-21 04:53:57.685275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.206 BaseBdev1 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 [ 00:07:41.206 { 00:07:41.206 "name": "BaseBdev1", 00:07:41.206 "aliases": [ 00:07:41.206 "52911b84-132a-47b2-9758-43c82b365614" 00:07:41.206 ], 00:07:41.206 "product_name": "Malloc disk", 00:07:41.206 "block_size": 512, 00:07:41.206 "num_blocks": 65536, 00:07:41.206 "uuid": "52911b84-132a-47b2-9758-43c82b365614", 00:07:41.206 "assigned_rate_limits": { 00:07:41.206 "rw_ios_per_sec": 0, 00:07:41.206 "rw_mbytes_per_sec": 0, 00:07:41.206 "r_mbytes_per_sec": 0, 00:07:41.206 
"w_mbytes_per_sec": 0 00:07:41.206 }, 00:07:41.206 "claimed": true, 00:07:41.206 "claim_type": "exclusive_write", 00:07:41.206 "zoned": false, 00:07:41.206 "supported_io_types": { 00:07:41.206 "read": true, 00:07:41.206 "write": true, 00:07:41.206 "unmap": true, 00:07:41.206 "flush": true, 00:07:41.206 "reset": true, 00:07:41.206 "nvme_admin": false, 00:07:41.206 "nvme_io": false, 00:07:41.206 "nvme_io_md": false, 00:07:41.206 "write_zeroes": true, 00:07:41.206 "zcopy": true, 00:07:41.206 "get_zone_info": false, 00:07:41.206 "zone_management": false, 00:07:41.206 "zone_append": false, 00:07:41.206 "compare": false, 00:07:41.206 "compare_and_write": false, 00:07:41.206 "abort": true, 00:07:41.206 "seek_hole": false, 00:07:41.206 "seek_data": false, 00:07:41.206 "copy": true, 00:07:41.206 "nvme_iov_md": false 00:07:41.206 }, 00:07:41.206 "memory_domains": [ 00:07:41.206 { 00:07:41.206 "dma_device_id": "system", 00:07:41.206 "dma_device_type": 1 00:07:41.206 }, 00:07:41.206 { 00:07:41.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.206 "dma_device_type": 2 00:07:41.206 } 00:07:41.206 ], 00:07:41.206 "driver_specific": {} 00:07:41.206 } 00:07:41.206 ] 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.206 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.207 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.207 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.207 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.207 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.207 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.207 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.207 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.207 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.207 "name": "Existed_Raid", 00:07:41.207 "uuid": "cbf797ac-2e13-49f1-8e97-734d96ec19bb", 00:07:41.207 "strip_size_kb": 0, 00:07:41.207 "state": "configuring", 00:07:41.207 "raid_level": "raid1", 00:07:41.207 "superblock": true, 00:07:41.207 "num_base_bdevs": 2, 00:07:41.207 "num_base_bdevs_discovered": 1, 00:07:41.207 "num_base_bdevs_operational": 2, 00:07:41.207 "base_bdevs_list": [ 00:07:41.207 { 00:07:41.207 "name": "BaseBdev1", 00:07:41.207 "uuid": "52911b84-132a-47b2-9758-43c82b365614", 00:07:41.207 "is_configured": true, 00:07:41.207 "data_offset": 2048, 00:07:41.207 "data_size": 63488 00:07:41.207 }, 00:07:41.207 { 00:07:41.207 "name": "BaseBdev2", 00:07:41.207 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:41.207 "is_configured": false, 00:07:41.207 "data_offset": 0, 00:07:41.207 "data_size": 0 00:07:41.207 } 00:07:41.207 ] 00:07:41.207 }' 00:07:41.207 04:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.207 04:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.467 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.467 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.467 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.467 [2024-11-21 04:53:58.192484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.467 [2024-11-21 04:53:58.192676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:41.467 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.467 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.467 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.467 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.737 [2024-11-21 04:53:58.200465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.737 [2024-11-21 04:53:58.202733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.737 [2024-11-21 04:53:58.202788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.737 "name": "Existed_Raid", 00:07:41.737 "uuid": "b6dfe823-9298-4581-b04f-de35abdd2697", 00:07:41.737 "strip_size_kb": 0, 00:07:41.737 "state": "configuring", 00:07:41.737 "raid_level": "raid1", 00:07:41.737 "superblock": true, 00:07:41.737 "num_base_bdevs": 2, 00:07:41.737 "num_base_bdevs_discovered": 1, 00:07:41.737 "num_base_bdevs_operational": 2, 00:07:41.737 "base_bdevs_list": [ 00:07:41.737 { 00:07:41.737 "name": "BaseBdev1", 00:07:41.737 "uuid": "52911b84-132a-47b2-9758-43c82b365614", 00:07:41.737 "is_configured": true, 00:07:41.737 "data_offset": 2048, 00:07:41.737 "data_size": 63488 00:07:41.737 }, 00:07:41.737 { 00:07:41.737 "name": "BaseBdev2", 00:07:41.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.737 "is_configured": false, 00:07:41.737 "data_offset": 0, 00:07:41.737 "data_size": 0 00:07:41.737 } 00:07:41.737 ] 00:07:41.737 }' 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.737 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.007 [2024-11-21 04:53:58.644695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.007 [2024-11-21 04:53:58.645128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:42.007 [2024-11-21 04:53:58.645199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:42.007 [2024-11-21 04:53:58.645606] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:42.007 BaseBdev2 00:07:42.007 [2024-11-21 04:53:58.645862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:42.007 [2024-11-21 04:53:58.645934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:42.007 [2024-11-21 04:53:58.646183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:42.007 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.007 [ 00:07:42.007 { 00:07:42.007 "name": "BaseBdev2", 00:07:42.007 "aliases": [ 00:07:42.007 "7c6deaf6-3a80-493b-9535-e36ea82a6021" 00:07:42.007 ], 00:07:42.007 "product_name": "Malloc disk", 00:07:42.007 "block_size": 512, 00:07:42.007 "num_blocks": 65536, 00:07:42.007 "uuid": "7c6deaf6-3a80-493b-9535-e36ea82a6021", 00:07:42.007 "assigned_rate_limits": { 00:07:42.007 "rw_ios_per_sec": 0, 00:07:42.007 "rw_mbytes_per_sec": 0, 00:07:42.008 "r_mbytes_per_sec": 0, 00:07:42.008 "w_mbytes_per_sec": 0 00:07:42.008 }, 00:07:42.008 "claimed": true, 00:07:42.008 "claim_type": "exclusive_write", 00:07:42.008 "zoned": false, 00:07:42.008 "supported_io_types": { 00:07:42.008 "read": true, 00:07:42.008 "write": true, 00:07:42.008 "unmap": true, 00:07:42.008 "flush": true, 00:07:42.008 "reset": true, 00:07:42.008 "nvme_admin": false, 00:07:42.008 "nvme_io": false, 00:07:42.008 "nvme_io_md": false, 00:07:42.008 "write_zeroes": true, 00:07:42.008 "zcopy": true, 00:07:42.008 "get_zone_info": false, 00:07:42.008 "zone_management": false, 00:07:42.008 "zone_append": false, 00:07:42.008 "compare": false, 00:07:42.008 "compare_and_write": false, 00:07:42.008 "abort": true, 00:07:42.008 "seek_hole": false, 00:07:42.008 "seek_data": false, 00:07:42.008 "copy": true, 00:07:42.008 "nvme_iov_md": false 00:07:42.008 }, 00:07:42.008 "memory_domains": [ 00:07:42.008 { 00:07:42.008 "dma_device_id": "system", 00:07:42.008 "dma_device_type": 1 00:07:42.008 }, 00:07:42.008 { 00:07:42.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.008 "dma_device_type": 2 00:07:42.008 } 00:07:42.008 ], 00:07:42.008 "driver_specific": {} 00:07:42.008 } 00:07:42.008 ] 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
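Once the raid goes online, the harness also lists which base bdevs are configured by selecting on `is_configured` inside `driver_specific.raid.base_bdevs_list` (the `bdev_raid.sh@188` filter further down in this log). A minimal stand-alone sketch against a hand-written record, with field names taken from the dumps in this log and illustrative values:

```shell
# Hand-written stand-in for one raid bdev entry from `bdev_get_bdevs` (trimmed).
raid='{"driver_specific": {"raid": {"base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": false}]}}}'

# Same filter as bdev_raid.sh@188: names of configured base bdevs only.
echo "$raid" |
  jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
# prints: BaseBdev1
```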
00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.008 04:53:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.008 "name": "Existed_Raid", 00:07:42.008 "uuid": "b6dfe823-9298-4581-b04f-de35abdd2697", 00:07:42.008 "strip_size_kb": 0, 00:07:42.008 "state": "online", 00:07:42.008 "raid_level": "raid1", 00:07:42.008 "superblock": true, 00:07:42.008 "num_base_bdevs": 2, 00:07:42.008 "num_base_bdevs_discovered": 2, 00:07:42.008 "num_base_bdevs_operational": 2, 00:07:42.008 "base_bdevs_list": [ 00:07:42.008 { 00:07:42.008 "name": "BaseBdev1", 00:07:42.008 "uuid": "52911b84-132a-47b2-9758-43c82b365614", 00:07:42.008 "is_configured": true, 00:07:42.008 "data_offset": 2048, 00:07:42.008 "data_size": 63488 00:07:42.008 }, 00:07:42.008 { 00:07:42.008 "name": "BaseBdev2", 00:07:42.008 "uuid": "7c6deaf6-3a80-493b-9535-e36ea82a6021", 00:07:42.008 "is_configured": true, 00:07:42.008 "data_offset": 2048, 00:07:42.008 "data_size": 63488 00:07:42.008 } 00:07:42.008 ] 00:07:42.008 }' 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.008 04:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.579 [2024-11-21 04:53:59.136294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:42.579 "name": "Existed_Raid", 00:07:42.579 "aliases": [ 00:07:42.579 "b6dfe823-9298-4581-b04f-de35abdd2697" 00:07:42.579 ], 00:07:42.579 "product_name": "Raid Volume", 00:07:42.579 "block_size": 512, 00:07:42.579 "num_blocks": 63488, 00:07:42.579 "uuid": "b6dfe823-9298-4581-b04f-de35abdd2697", 00:07:42.579 "assigned_rate_limits": { 00:07:42.579 "rw_ios_per_sec": 0, 00:07:42.579 "rw_mbytes_per_sec": 0, 00:07:42.579 "r_mbytes_per_sec": 0, 00:07:42.579 "w_mbytes_per_sec": 0 00:07:42.579 }, 00:07:42.579 "claimed": false, 00:07:42.579 "zoned": false, 00:07:42.579 "supported_io_types": { 00:07:42.579 "read": true, 00:07:42.579 "write": true, 00:07:42.579 "unmap": false, 00:07:42.579 "flush": false, 00:07:42.579 "reset": true, 00:07:42.579 "nvme_admin": false, 00:07:42.579 "nvme_io": false, 00:07:42.579 "nvme_io_md": false, 00:07:42.579 "write_zeroes": true, 00:07:42.579 "zcopy": false, 00:07:42.579 "get_zone_info": false, 00:07:42.579 "zone_management": false, 00:07:42.579 "zone_append": false, 00:07:42.579 "compare": false, 00:07:42.579 "compare_and_write": false, 00:07:42.579 "abort": false, 00:07:42.579 "seek_hole": false, 00:07:42.579 "seek_data": false, 00:07:42.579 "copy": false, 00:07:42.579 "nvme_iov_md": false 00:07:42.579 }, 00:07:42.579 "memory_domains": [ 00:07:42.579 { 00:07:42.579 
"dma_device_id": "system", 00:07:42.579 "dma_device_type": 1 00:07:42.579 }, 00:07:42.579 { 00:07:42.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.579 "dma_device_type": 2 00:07:42.579 }, 00:07:42.579 { 00:07:42.579 "dma_device_id": "system", 00:07:42.579 "dma_device_type": 1 00:07:42.579 }, 00:07:42.579 { 00:07:42.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.579 "dma_device_type": 2 00:07:42.579 } 00:07:42.579 ], 00:07:42.579 "driver_specific": { 00:07:42.579 "raid": { 00:07:42.579 "uuid": "b6dfe823-9298-4581-b04f-de35abdd2697", 00:07:42.579 "strip_size_kb": 0, 00:07:42.579 "state": "online", 00:07:42.579 "raid_level": "raid1", 00:07:42.579 "superblock": true, 00:07:42.579 "num_base_bdevs": 2, 00:07:42.579 "num_base_bdevs_discovered": 2, 00:07:42.579 "num_base_bdevs_operational": 2, 00:07:42.579 "base_bdevs_list": [ 00:07:42.579 { 00:07:42.579 "name": "BaseBdev1", 00:07:42.579 "uuid": "52911b84-132a-47b2-9758-43c82b365614", 00:07:42.579 "is_configured": true, 00:07:42.579 "data_offset": 2048, 00:07:42.579 "data_size": 63488 00:07:42.579 }, 00:07:42.579 { 00:07:42.579 "name": "BaseBdev2", 00:07:42.579 "uuid": "7c6deaf6-3a80-493b-9535-e36ea82a6021", 00:07:42.579 "is_configured": true, 00:07:42.579 "data_offset": 2048, 00:07:42.579 "data_size": 63488 00:07:42.579 } 00:07:42.579 ] 00:07:42.579 } 00:07:42.579 } 00:07:42.579 }' 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:42.579 BaseBdev2' 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:42.579 04:53:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.579 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.580 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.580 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.580 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:42.580 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.580 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.580 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.580 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.840 [2024-11-21 04:53:59.339664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.840 04:53:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.840 "name": "Existed_Raid", 00:07:42.840 "uuid": "b6dfe823-9298-4581-b04f-de35abdd2697", 00:07:42.840 "strip_size_kb": 0, 00:07:42.840 "state": "online", 00:07:42.840 "raid_level": "raid1", 00:07:42.840 "superblock": true, 00:07:42.840 "num_base_bdevs": 2, 00:07:42.840 "num_base_bdevs_discovered": 1, 00:07:42.840 "num_base_bdevs_operational": 1, 00:07:42.840 "base_bdevs_list": [ 00:07:42.840 { 00:07:42.840 "name": null, 00:07:42.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.840 "is_configured": false, 00:07:42.840 "data_offset": 0, 00:07:42.840 "data_size": 63488 00:07:42.840 }, 00:07:42.840 { 00:07:42.840 "name": "BaseBdev2", 00:07:42.840 "uuid": "7c6deaf6-3a80-493b-9535-e36ea82a6021", 00:07:42.840 "is_configured": true, 00:07:42.840 "data_offset": 2048, 00:07:42.840 "data_size": 63488 00:07:42.840 } 00:07:42.840 ] 00:07:42.840 }' 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.840 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.410 04:53:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.410 [2024-11-21 04:53:59.887914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:43.410 [2024-11-21 04:53:59.888113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.410 [2024-11-21 04:53:59.909537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.410 [2024-11-21 04:53:59.909665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.410 [2024-11-21 04:53:59.909712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, 
state offline 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74366 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74366 ']' 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74366 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74366 00:07:43.410 killing process with pid 74366 00:07:43.410 04:53:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74366' 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74366 00:07:43.410 [2024-11-21 04:53:59.996033] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.410 04:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74366 00:07:43.410 [2024-11-21 04:53:59.997653] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.671 04:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:43.671 00:07:43.671 real 0m4.033s 00:07:43.671 user 0m6.201s 00:07:43.671 sys 0m0.858s 00:07:43.671 04:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.671 04:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.671 ************************************ 00:07:43.671 END TEST raid_state_function_test_sb 00:07:43.671 ************************************ 00:07:43.671 04:54:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:43.671 04:54:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:43.671 04:54:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.671 04:54:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.671 ************************************ 00:07:43.671 START TEST raid_superblock_test 00:07:43.671 ************************************ 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:43.671 
04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:43.671 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:43.931 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74607 00:07:43.931 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:43.931 04:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74607 00:07:43.931 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:43.931 04:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74607 ']' 00:07:43.931 04:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.931 04:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.931 04:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.931 04:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.931 04:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.931 [2024-11-21 04:54:00.480078] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:43.931 [2024-11-21 04:54:00.480312] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74607 ] 00:07:43.931 [2024-11-21 04:54:00.650280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.191 [2024-11-21 04:54:00.692389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.191 [2024-11-21 04:54:00.769343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.191 [2024-11-21 04:54:00.769492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.761 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.761 malloc1 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.762 [2024-11-21 04:54:01.368296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.762 [2024-11-21 04:54:01.368375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.762 [2024-11-21 04:54:01.368398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:44.762 [2024-11-21 04:54:01.368414] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.762 [2024-11-21 04:54:01.370962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.762 [2024-11-21 04:54:01.371014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.762 pt1 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.762 malloc2 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:44.762 04:54:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.762 [2024-11-21 04:54:01.403138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.762 [2024-11-21 04:54:01.403294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.762 [2024-11-21 04:54:01.403329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:44.762 [2024-11-21 04:54:01.403359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.762 [2024-11-21 04:54:01.405827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.762 [2024-11-21 04:54:01.405898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:44.762 pt2 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.762 [2024-11-21 04:54:01.415163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:44.762 [2024-11-21 04:54:01.417418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:44.762 [2024-11-21 04:54:01.417616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:44.762 [2024-11-21 04:54:01.417665] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:44.762 [2024-11-21 04:54:01.418008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:44.762 [2024-11-21 04:54:01.418208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:44.762 [2024-11-21 04:54:01.418254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:44.762 [2024-11-21 04:54:01.418459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.762 "name": "raid_bdev1", 00:07:44.762 "uuid": "95f2db15-ecf5-4cb4-bf87-32b5cadea748", 00:07:44.762 "strip_size_kb": 0, 00:07:44.762 "state": "online", 00:07:44.762 "raid_level": "raid1", 00:07:44.762 "superblock": true, 00:07:44.762 "num_base_bdevs": 2, 00:07:44.762 "num_base_bdevs_discovered": 2, 00:07:44.762 "num_base_bdevs_operational": 2, 00:07:44.762 "base_bdevs_list": [ 00:07:44.762 { 00:07:44.762 "name": "pt1", 00:07:44.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.762 "is_configured": true, 00:07:44.762 "data_offset": 2048, 00:07:44.762 "data_size": 63488 00:07:44.762 }, 00:07:44.762 { 00:07:44.762 "name": "pt2", 00:07:44.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.762 "is_configured": true, 00:07:44.762 "data_offset": 2048, 00:07:44.762 "data_size": 63488 00:07:44.762 } 00:07:44.762 ] 00:07:44.762 }' 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.762 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.331 
04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.331 [2024-11-21 04:54:01.854635] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.331 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.331 "name": "raid_bdev1", 00:07:45.331 "aliases": [ 00:07:45.331 "95f2db15-ecf5-4cb4-bf87-32b5cadea748" 00:07:45.331 ], 00:07:45.331 "product_name": "Raid Volume", 00:07:45.331 "block_size": 512, 00:07:45.331 "num_blocks": 63488, 00:07:45.331 "uuid": "95f2db15-ecf5-4cb4-bf87-32b5cadea748", 00:07:45.331 "assigned_rate_limits": { 00:07:45.331 "rw_ios_per_sec": 0, 00:07:45.331 "rw_mbytes_per_sec": 0, 00:07:45.331 "r_mbytes_per_sec": 0, 00:07:45.331 "w_mbytes_per_sec": 0 00:07:45.331 }, 00:07:45.331 "claimed": false, 00:07:45.331 "zoned": false, 00:07:45.331 "supported_io_types": { 00:07:45.331 "read": true, 00:07:45.331 "write": true, 00:07:45.332 "unmap": false, 00:07:45.332 "flush": false, 00:07:45.332 "reset": true, 00:07:45.332 "nvme_admin": false, 00:07:45.332 "nvme_io": false, 00:07:45.332 "nvme_io_md": false, 00:07:45.332 "write_zeroes": true, 00:07:45.332 "zcopy": false, 00:07:45.332 "get_zone_info": false, 00:07:45.332 "zone_management": false, 00:07:45.332 "zone_append": false, 00:07:45.332 "compare": false, 00:07:45.332 
"compare_and_write": false, 00:07:45.332 "abort": false, 00:07:45.332 "seek_hole": false, 00:07:45.332 "seek_data": false, 00:07:45.332 "copy": false, 00:07:45.332 "nvme_iov_md": false 00:07:45.332 }, 00:07:45.332 "memory_domains": [ 00:07:45.332 { 00:07:45.332 "dma_device_id": "system", 00:07:45.332 "dma_device_type": 1 00:07:45.332 }, 00:07:45.332 { 00:07:45.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.332 "dma_device_type": 2 00:07:45.332 }, 00:07:45.332 { 00:07:45.332 "dma_device_id": "system", 00:07:45.332 "dma_device_type": 1 00:07:45.332 }, 00:07:45.332 { 00:07:45.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.332 "dma_device_type": 2 00:07:45.332 } 00:07:45.332 ], 00:07:45.332 "driver_specific": { 00:07:45.332 "raid": { 00:07:45.332 "uuid": "95f2db15-ecf5-4cb4-bf87-32b5cadea748", 00:07:45.332 "strip_size_kb": 0, 00:07:45.332 "state": "online", 00:07:45.332 "raid_level": "raid1", 00:07:45.332 "superblock": true, 00:07:45.332 "num_base_bdevs": 2, 00:07:45.332 "num_base_bdevs_discovered": 2, 00:07:45.332 "num_base_bdevs_operational": 2, 00:07:45.332 "base_bdevs_list": [ 00:07:45.332 { 00:07:45.332 "name": "pt1", 00:07:45.332 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.332 "is_configured": true, 00:07:45.332 "data_offset": 2048, 00:07:45.332 "data_size": 63488 00:07:45.332 }, 00:07:45.332 { 00:07:45.332 "name": "pt2", 00:07:45.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.332 "is_configured": true, 00:07:45.332 "data_offset": 2048, 00:07:45.332 "data_size": 63488 00:07:45.332 } 00:07:45.332 ] 00:07:45.332 } 00:07:45.332 } 00:07:45.332 }' 00:07:45.332 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:45.332 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:45.332 pt2' 00:07:45.332 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.332 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.332 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.332 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.332 04:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:45.332 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.332 04:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.332 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.332 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.332 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.332 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.332 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:45.332 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.332 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.332 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.592 04:54:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.592 [2024-11-21 04:54:02.098176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=95f2db15-ecf5-4cb4-bf87-32b5cadea748 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 95f2db15-ecf5-4cb4-bf87-32b5cadea748 ']' 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.592 [2024-11-21 04:54:02.145840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.592 [2024-11-21 04:54:02.145866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.592 [2024-11-21 04:54:02.145937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.592 [2024-11-21 04:54:02.145997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.592 [2024-11-21 04:54:02.146013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:45.592 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.593 [2024-11-21 04:54:02.285622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:45.593 [2024-11-21 04:54:02.287812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:45.593 [2024-11-21 
04:54:02.287869] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:45.593 [2024-11-21 04:54:02.287907] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:45.593 [2024-11-21 04:54:02.287922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.593 [2024-11-21 04:54:02.287930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:45.593 request: 00:07:45.593 { 00:07:45.593 "name": "raid_bdev1", 00:07:45.593 "raid_level": "raid1", 00:07:45.593 "base_bdevs": [ 00:07:45.593 "malloc1", 00:07:45.593 "malloc2" 00:07:45.593 ], 00:07:45.593 "superblock": false, 00:07:45.593 "method": "bdev_raid_create", 00:07:45.593 "req_id": 1 00:07:45.593 } 00:07:45.593 Got JSON-RPC error response 00:07:45.593 response: 00:07:45.593 { 00:07:45.593 "code": -17, 00:07:45.593 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:45.593 } 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.593 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.853 [2024-11-21 04:54:02.349458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:45.853 [2024-11-21 04:54:02.349546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.853 [2024-11-21 04:54:02.349580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:45.853 [2024-11-21 04:54:02.349606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.853 [2024-11-21 04:54:02.352117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.853 [2024-11-21 04:54:02.352184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:45.853 [2024-11-21 04:54:02.352268] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:45.853 [2024-11-21 04:54:02.352327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:45.853 pt1 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.853 "name": "raid_bdev1", 00:07:45.853 "uuid": "95f2db15-ecf5-4cb4-bf87-32b5cadea748", 00:07:45.853 "strip_size_kb": 0, 00:07:45.853 "state": "configuring", 00:07:45.853 "raid_level": "raid1", 00:07:45.853 "superblock": true, 00:07:45.853 "num_base_bdevs": 2, 00:07:45.853 "num_base_bdevs_discovered": 1, 00:07:45.853 "num_base_bdevs_operational": 2, 00:07:45.853 "base_bdevs_list": [ 00:07:45.853 { 00:07:45.853 "name": 
"pt1", 00:07:45.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.853 "is_configured": true, 00:07:45.853 "data_offset": 2048, 00:07:45.853 "data_size": 63488 00:07:45.853 }, 00:07:45.853 { 00:07:45.853 "name": null, 00:07:45.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.853 "is_configured": false, 00:07:45.853 "data_offset": 2048, 00:07:45.853 "data_size": 63488 00:07:45.853 } 00:07:45.853 ] 00:07:45.853 }' 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.853 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.113 [2024-11-21 04:54:02.760778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.113 [2024-11-21 04:54:02.760840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.113 [2024-11-21 04:54:02.760865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:46.113 [2024-11-21 04:54:02.760874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.113 [2024-11-21 04:54:02.761327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.113 [2024-11-21 04:54:02.761380] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:07:46.113 [2024-11-21 04:54:02.761453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:46.113 [2024-11-21 04:54:02.761481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.113 [2024-11-21 04:54:02.761578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:46.113 [2024-11-21 04:54:02.761586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:46.113 [2024-11-21 04:54:02.761846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:46.113 [2024-11-21 04:54:02.761975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:46.113 [2024-11-21 04:54:02.761991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:46.113 [2024-11-21 04:54:02.762111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.113 pt2 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.113 "name": "raid_bdev1", 00:07:46.113 "uuid": "95f2db15-ecf5-4cb4-bf87-32b5cadea748", 00:07:46.113 "strip_size_kb": 0, 00:07:46.113 "state": "online", 00:07:46.113 "raid_level": "raid1", 00:07:46.113 "superblock": true, 00:07:46.113 "num_base_bdevs": 2, 00:07:46.113 "num_base_bdevs_discovered": 2, 00:07:46.113 "num_base_bdevs_operational": 2, 00:07:46.113 "base_bdevs_list": [ 00:07:46.113 { 00:07:46.113 "name": "pt1", 00:07:46.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.113 "is_configured": true, 00:07:46.113 "data_offset": 2048, 00:07:46.113 "data_size": 63488 00:07:46.113 }, 00:07:46.113 { 00:07:46.113 "name": "pt2", 00:07:46.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.113 "is_configured": true, 00:07:46.113 "data_offset": 2048, 00:07:46.113 "data_size": 63488 00:07:46.113 } 00:07:46.113 ] 00:07:46.113 }' 
00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.113 04:54:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.683 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.683 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:46.683 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.683 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.684 [2024-11-21 04:54:03.216247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.684 "name": "raid_bdev1", 00:07:46.684 "aliases": [ 00:07:46.684 "95f2db15-ecf5-4cb4-bf87-32b5cadea748" 00:07:46.684 ], 00:07:46.684 "product_name": "Raid Volume", 00:07:46.684 "block_size": 512, 00:07:46.684 "num_blocks": 63488, 00:07:46.684 "uuid": "95f2db15-ecf5-4cb4-bf87-32b5cadea748", 00:07:46.684 "assigned_rate_limits": { 00:07:46.684 "rw_ios_per_sec": 0, 00:07:46.684 "rw_mbytes_per_sec": 
0, 00:07:46.684 "r_mbytes_per_sec": 0, 00:07:46.684 "w_mbytes_per_sec": 0 00:07:46.684 }, 00:07:46.684 "claimed": false, 00:07:46.684 "zoned": false, 00:07:46.684 "supported_io_types": { 00:07:46.684 "read": true, 00:07:46.684 "write": true, 00:07:46.684 "unmap": false, 00:07:46.684 "flush": false, 00:07:46.684 "reset": true, 00:07:46.684 "nvme_admin": false, 00:07:46.684 "nvme_io": false, 00:07:46.684 "nvme_io_md": false, 00:07:46.684 "write_zeroes": true, 00:07:46.684 "zcopy": false, 00:07:46.684 "get_zone_info": false, 00:07:46.684 "zone_management": false, 00:07:46.684 "zone_append": false, 00:07:46.684 "compare": false, 00:07:46.684 "compare_and_write": false, 00:07:46.684 "abort": false, 00:07:46.684 "seek_hole": false, 00:07:46.684 "seek_data": false, 00:07:46.684 "copy": false, 00:07:46.684 "nvme_iov_md": false 00:07:46.684 }, 00:07:46.684 "memory_domains": [ 00:07:46.684 { 00:07:46.684 "dma_device_id": "system", 00:07:46.684 "dma_device_type": 1 00:07:46.684 }, 00:07:46.684 { 00:07:46.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.684 "dma_device_type": 2 00:07:46.684 }, 00:07:46.684 { 00:07:46.684 "dma_device_id": "system", 00:07:46.684 "dma_device_type": 1 00:07:46.684 }, 00:07:46.684 { 00:07:46.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.684 "dma_device_type": 2 00:07:46.684 } 00:07:46.684 ], 00:07:46.684 "driver_specific": { 00:07:46.684 "raid": { 00:07:46.684 "uuid": "95f2db15-ecf5-4cb4-bf87-32b5cadea748", 00:07:46.684 "strip_size_kb": 0, 00:07:46.684 "state": "online", 00:07:46.684 "raid_level": "raid1", 00:07:46.684 "superblock": true, 00:07:46.684 "num_base_bdevs": 2, 00:07:46.684 "num_base_bdevs_discovered": 2, 00:07:46.684 "num_base_bdevs_operational": 2, 00:07:46.684 "base_bdevs_list": [ 00:07:46.684 { 00:07:46.684 "name": "pt1", 00:07:46.684 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.684 "is_configured": true, 00:07:46.684 "data_offset": 2048, 00:07:46.684 "data_size": 63488 00:07:46.684 }, 00:07:46.684 { 
00:07:46.684 "name": "pt2", 00:07:46.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.684 "is_configured": true, 00:07:46.684 "data_offset": 2048, 00:07:46.684 "data_size": 63488 00:07:46.684 } 00:07:46.684 ] 00:07:46.684 } 00:07:46.684 } 00:07:46.684 }' 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:46.684 pt2' 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.684 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.945 [2024-11-21 04:54:03.459916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 95f2db15-ecf5-4cb4-bf87-32b5cadea748 '!=' 95f2db15-ecf5-4cb4-bf87-32b5cadea748 ']' 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.945 04:54:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.945 [2024-11-21 04:54:03.503617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.945 "name": "raid_bdev1", 00:07:46.945 "uuid": "95f2db15-ecf5-4cb4-bf87-32b5cadea748", 00:07:46.945 "strip_size_kb": 0, 00:07:46.945 "state": "online", 00:07:46.945 "raid_level": "raid1", 00:07:46.945 "superblock": true, 00:07:46.945 "num_base_bdevs": 2, 00:07:46.945 "num_base_bdevs_discovered": 1, 00:07:46.945 "num_base_bdevs_operational": 1, 00:07:46.945 "base_bdevs_list": [ 00:07:46.945 { 00:07:46.945 "name": null, 00:07:46.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.945 "is_configured": false, 00:07:46.945 "data_offset": 0, 00:07:46.945 "data_size": 63488 00:07:46.945 }, 00:07:46.945 { 00:07:46.945 "name": "pt2", 00:07:46.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.945 "is_configured": true, 00:07:46.945 "data_offset": 2048, 00:07:46.945 "data_size": 63488 00:07:46.945 } 00:07:46.945 ] 00:07:46.945 }' 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.945 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.204 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:47.205 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.205 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.205 [2024-11-21 04:54:03.934774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.205 [2024-11-21 04:54:03.934908] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.205 [2024-11-21 04:54:03.935075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.205 [2024-11-21 04:54:03.935195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.205 [2024-11-21 04:54:03.935258] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:47.500 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.500 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.500 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.500 04:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:47.500 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.500 04:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=1 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.500 [2024-11-21 04:54:04.022624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.500 [2024-11-21 04:54:04.022777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.500 [2024-11-21 04:54:04.022819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:47.500 [2024-11-21 04:54:04.022852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.500 [2024-11-21 04:54:04.025569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.500 [2024-11-21 04:54:04.025649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:47.500 [2024-11-21 04:54:04.025779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:47.500 [2024-11-21 04:54:04.025849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.500 [2024-11-21 04:54:04.026001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:47.500 [2024-11-21 04:54:04.026037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:47.500 [2024-11-21 04:54:04.026333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:47.500 [2024-11-21 04:54:04.026518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:47.500 [2024-11-21 04:54:04.026565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000006d00 00:07:47.500 [2024-11-21 04:54:04.026792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.500 pt2 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.500 04:54:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.500 "name": "raid_bdev1", 00:07:47.500 "uuid": "95f2db15-ecf5-4cb4-bf87-32b5cadea748", 00:07:47.500 "strip_size_kb": 0, 00:07:47.500 "state": "online", 00:07:47.500 "raid_level": "raid1", 00:07:47.500 "superblock": true, 00:07:47.500 "num_base_bdevs": 2, 00:07:47.500 "num_base_bdevs_discovered": 1, 00:07:47.500 "num_base_bdevs_operational": 1, 00:07:47.500 "base_bdevs_list": [ 00:07:47.500 { 00:07:47.500 "name": null, 00:07:47.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.500 "is_configured": false, 00:07:47.500 "data_offset": 2048, 00:07:47.500 "data_size": 63488 00:07:47.500 }, 00:07:47.500 { 00:07:47.500 "name": "pt2", 00:07:47.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.501 "is_configured": true, 00:07:47.501 "data_offset": 2048, 00:07:47.501 "data_size": 63488 00:07:47.501 } 00:07:47.501 ] 00:07:47.501 }' 00:07:47.501 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.501 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.760 [2024-11-21 04:54:04.422159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.760 [2024-11-21 04:54:04.422203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.760 [2024-11-21 04:54:04.422302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.760 [2024-11-21 04:54:04.422360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.760 [2024-11-21 04:54:04.422373] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.760 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.760 [2024-11-21 04:54:04.485957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:47.760 [2024-11-21 04:54:04.486085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.760 [2024-11-21 04:54:04.486139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:47.760 [2024-11-21 04:54:04.486214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.760 [2024-11-21 04:54:04.488815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:07:47.761 [2024-11-21 04:54:04.488853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:47.761 [2024-11-21 04:54:04.488932] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:47.761 [2024-11-21 04:54:04.488974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:47.761 [2024-11-21 04:54:04.489086] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:47.761 [2024-11-21 04:54:04.489118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.761 [2024-11-21 04:54:04.489161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:07:47.761 [2024-11-21 04:54:04.489199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.761 [2024-11-21 04:54:04.489289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:47.761 [2024-11-21 04:54:04.489300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:47.761 [2024-11-21 04:54:04.489533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:47.761 [2024-11-21 04:54:04.489654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:47.761 [2024-11-21 04:54:04.489664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:47.761 [2024-11-21 04:54:04.489777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.020 pt1 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.020 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.020 "name": "raid_bdev1", 00:07:48.020 "uuid": "95f2db15-ecf5-4cb4-bf87-32b5cadea748", 00:07:48.020 "strip_size_kb": 0, 00:07:48.020 "state": "online", 00:07:48.020 "raid_level": "raid1", 00:07:48.020 "superblock": true, 00:07:48.020 "num_base_bdevs": 2, 00:07:48.020 
"num_base_bdevs_discovered": 1, 00:07:48.020 "num_base_bdevs_operational": 1, 00:07:48.020 "base_bdevs_list": [ 00:07:48.020 { 00:07:48.020 "name": null, 00:07:48.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.020 "is_configured": false, 00:07:48.020 "data_offset": 2048, 00:07:48.020 "data_size": 63488 00:07:48.020 }, 00:07:48.020 { 00:07:48.020 "name": "pt2", 00:07:48.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.020 "is_configured": true, 00:07:48.020 "data_offset": 2048, 00:07:48.020 "data_size": 63488 00:07:48.020 } 00:07:48.020 ] 00:07:48.020 }' 00:07:48.021 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.021 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.281 [2024-11-21 04:54:04.977404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:48.281 04:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.281 04:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 95f2db15-ecf5-4cb4-bf87-32b5cadea748 '!=' 95f2db15-ecf5-4cb4-bf87-32b5cadea748 ']' 00:07:48.281 04:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74607 00:07:48.281 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74607 ']' 00:07:48.281 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74607 00:07:48.281 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:48.281 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.540 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74607 00:07:48.540 killing process with pid 74607 00:07:48.540 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.540 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.540 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74607' 00:07:48.540 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74607 00:07:48.540 [2024-11-21 04:54:05.046634] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.540 [2024-11-21 04:54:05.046727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.540 [2024-11-21 04:54:05.046785] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.540 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74607 00:07:48.540 [2024-11-21 04:54:05.046794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:48.540 [2024-11-21 04:54:05.089135] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.801 04:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:48.801 00:07:48.801 real 0m5.014s 00:07:48.801 user 0m8.038s 00:07:48.801 sys 0m1.138s 00:07:48.801 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.801 04:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.801 ************************************ 00:07:48.801 END TEST raid_superblock_test 00:07:48.801 ************************************ 00:07:48.801 04:54:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:48.801 04:54:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:48.801 04:54:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.801 04:54:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.801 ************************************ 00:07:48.801 START TEST raid_read_error_test 00:07:48.801 ************************************ 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:48.801 04:54:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4KRWUVKgMX 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74926 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@811 -- # waitforlisten 74926 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74926 ']' 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.801 04:54:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.061 [2024-11-21 04:54:05.582521] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:07:49.061 [2024-11-21 04:54:05.582646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74926 ] 00:07:49.061 [2024-11-21 04:54:05.750157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.320 [2024-11-21 04:54:05.793942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.320 [2024-11-21 04:54:05.869879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.320 [2024-11-21 04:54:05.869926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.890 BaseBdev1_malloc 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.890 true 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.890 [2024-11-21 04:54:06.452285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:49.890 [2024-11-21 04:54:06.452352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.890 [2024-11-21 04:54:06.452380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:49.890 [2024-11-21 04:54:06.452389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.890 [2024-11-21 04:54:06.454887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.890 [2024-11-21 04:54:06.454925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:49.890 BaseBdev1 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.890 BaseBdev2_malloc 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.890 true 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.890 [2024-11-21 04:54:06.498915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:49.890 [2024-11-21 04:54:06.499047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.890 [2024-11-21 04:54:06.499071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:49.890 [2024-11-21 04:54:06.499080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.890 [2024-11-21 04:54:06.501490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.890 [2024-11-21 04:54:06.501527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:49.890 BaseBdev2 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.890 [2024-11-21 04:54:06.510965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.890 
[2024-11-21 04:54:06.513128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.890 [2024-11-21 04:54:06.513419] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:49.890 [2024-11-21 04:54:06.513442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:49.890 [2024-11-21 04:54:06.513703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:49.890 [2024-11-21 04:54:06.513865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:49.890 [2024-11-21 04:54:06.513878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:49.890 [2024-11-21 04:54:06.514007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.890 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.890 "name": "raid_bdev1", 00:07:49.890 "uuid": "96aeed71-4e95-4e7b-8f20-dd9336a20a73", 00:07:49.890 "strip_size_kb": 0, 00:07:49.890 "state": "online", 00:07:49.891 "raid_level": "raid1", 00:07:49.891 "superblock": true, 00:07:49.891 "num_base_bdevs": 2, 00:07:49.891 "num_base_bdevs_discovered": 2, 00:07:49.891 "num_base_bdevs_operational": 2, 00:07:49.891 "base_bdevs_list": [ 00:07:49.891 { 00:07:49.891 "name": "BaseBdev1", 00:07:49.891 "uuid": "c092be99-3255-54db-af56-408d067d2f7d", 00:07:49.891 "is_configured": true, 00:07:49.891 "data_offset": 2048, 00:07:49.891 "data_size": 63488 00:07:49.891 }, 00:07:49.891 { 00:07:49.891 "name": "BaseBdev2", 00:07:49.891 "uuid": "5229f12f-49d4-562c-8663-26c3629d7a51", 00:07:49.891 "is_configured": true, 00:07:49.891 "data_offset": 2048, 00:07:49.891 "data_size": 63488 00:07:49.891 } 00:07:49.891 ] 00:07:49.891 }' 00:07:49.891 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.891 04:54:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.460 04:54:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:50.460 04:54:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:50.460 [2024-11-21 04:54:07.050564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:51.398 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:51.398 04:54:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.398 04:54:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.398 04:54:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.398 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:51.398 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.399 04:54:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.399 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.399 04:54:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.399 "name": "raid_bdev1", 00:07:51.399 "uuid": "96aeed71-4e95-4e7b-8f20-dd9336a20a73", 00:07:51.399 "strip_size_kb": 0, 00:07:51.399 "state": "online", 00:07:51.399 "raid_level": "raid1", 00:07:51.399 "superblock": true, 00:07:51.399 "num_base_bdevs": 2, 00:07:51.399 "num_base_bdevs_discovered": 2, 00:07:51.399 "num_base_bdevs_operational": 2, 00:07:51.399 "base_bdevs_list": [ 00:07:51.399 { 00:07:51.399 "name": "BaseBdev1", 00:07:51.399 "uuid": "c092be99-3255-54db-af56-408d067d2f7d", 00:07:51.399 "is_configured": true, 00:07:51.399 "data_offset": 2048, 00:07:51.399 "data_size": 63488 00:07:51.399 }, 00:07:51.399 { 00:07:51.399 "name": "BaseBdev2", 00:07:51.399 "uuid": "5229f12f-49d4-562c-8663-26c3629d7a51", 00:07:51.399 "is_configured": true, 00:07:51.399 "data_offset": 2048, 00:07:51.399 "data_size": 63488 00:07:51.399 } 00:07:51.399 ] 00:07:51.399 }' 00:07:51.399 04:54:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.399 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.968 04:54:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.968 [2024-11-21 04:54:08.445925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.968 [2024-11-21 04:54:08.445974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.968 [2024-11-21 04:54:08.448552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.968 [2024-11-21 04:54:08.448679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.968 [2024-11-21 04:54:08.448780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.968 [2024-11-21 04:54:08.448823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:51.968 { 00:07:51.968 "results": [ 00:07:51.968 { 00:07:51.968 "job": "raid_bdev1", 00:07:51.968 "core_mask": "0x1", 00:07:51.968 "workload": "randrw", 00:07:51.968 "percentage": 50, 00:07:51.968 "status": "finished", 00:07:51.968 "queue_depth": 1, 00:07:51.968 "io_size": 131072, 00:07:51.968 "runtime": 1.395678, 00:07:51.968 "iops": 14785.64539958357, 00:07:51.968 "mibps": 1848.2056749479464, 00:07:51.968 "io_failed": 0, 00:07:51.968 "io_timeout": 0, 00:07:51.968 "avg_latency_us": 65.1590614951105, 00:07:51.968 "min_latency_us": 21.799126637554586, 00:07:51.968 "max_latency_us": 1416.6078602620087 00:07:51.968 } 00:07:51.968 ], 00:07:51.968 "core_count": 1 00:07:51.968 } 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74926 00:07:51.968 04:54:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74926 ']' 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74926 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74926 00:07:51.968 killing process with pid 74926 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74926' 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74926 00:07:51.968 [2024-11-21 04:54:08.498762] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.968 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74926 00:07:51.968 [2024-11-21 04:54:08.528401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.228 04:54:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4KRWUVKgMX 00:07:52.228 04:54:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:52.228 04:54:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:52.228 04:54:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:52.228 04:54:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:52.228 04:54:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.228 04:54:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:52.228 ************************************ 00:07:52.228 END TEST raid_read_error_test 00:07:52.228 ************************************ 00:07:52.228 04:54:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:52.228 00:07:52.228 real 0m3.377s 00:07:52.228 user 0m4.194s 00:07:52.228 sys 0m0.589s 00:07:52.228 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.228 04:54:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.228 04:54:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:52.228 04:54:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:52.228 04:54:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.228 04:54:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.228 ************************************ 00:07:52.228 START TEST raid_write_error_test 00:07:52.228 ************************************ 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.228 
04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nl6Ns8VEK5 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75055 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75055 00:07:52.228 
04:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75055 ']' 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.228 04:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.488 [2024-11-21 04:54:09.029566] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:52.488 [2024-11-21 04:54:09.029774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75055 ] 00:07:52.488 [2024-11-21 04:54:09.196909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.746 [2024-11-21 04:54:09.239559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.746 [2024-11-21 04:54:09.316267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.746 [2024-11-21 04:54:09.316405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.316 BaseBdev1_malloc 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.316 true 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.316 [2024-11-21 04:54:09.894965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:53.316 [2024-11-21 04:54:09.895047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.316 [2024-11-21 04:54:09.895069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:53.316 [2024-11-21 04:54:09.895078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.316 [2024-11-21 04:54:09.897618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.316 [2024-11-21 04:54:09.897661] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:53.316 BaseBdev1 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.316 BaseBdev2_malloc 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.316 true 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.316 [2024-11-21 04:54:09.942029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:53.316 [2024-11-21 04:54:09.942116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.316 [2024-11-21 04:54:09.942156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:53.316 
[2024-11-21 04:54:09.942165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.316 [2024-11-21 04:54:09.944634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.316 [2024-11-21 04:54:09.944676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:53.316 BaseBdev2 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.316 [2024-11-21 04:54:09.954063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.316 [2024-11-21 04:54:09.956317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.316 [2024-11-21 04:54:09.956527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:53.316 [2024-11-21 04:54:09.956541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:53.316 [2024-11-21 04:54:09.956827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:53.316 [2024-11-21 04:54:09.957003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:53.316 [2024-11-21 04:54:09.957017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:53.316 [2024-11-21 04:54:09.957196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.316 
04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.316 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.317 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.317 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.317 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.317 04:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.317 04:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.317 04:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.317 "name": "raid_bdev1", 00:07:53.317 "uuid": "23c4308e-f2e4-4865-9b38-50b829329a63", 00:07:53.317 "strip_size_kb": 0, 00:07:53.317 "state": "online", 00:07:53.317 "raid_level": "raid1", 00:07:53.317 "superblock": true, 00:07:53.317 
"num_base_bdevs": 2, 00:07:53.317 "num_base_bdevs_discovered": 2, 00:07:53.317 "num_base_bdevs_operational": 2, 00:07:53.317 "base_bdevs_list": [ 00:07:53.317 { 00:07:53.317 "name": "BaseBdev1", 00:07:53.317 "uuid": "4abe51b9-b889-5c01-b084-76a03e0afd41", 00:07:53.317 "is_configured": true, 00:07:53.317 "data_offset": 2048, 00:07:53.317 "data_size": 63488 00:07:53.317 }, 00:07:53.317 { 00:07:53.317 "name": "BaseBdev2", 00:07:53.317 "uuid": "eff9975a-b015-57ad-8ea5-67c6288f479b", 00:07:53.317 "is_configured": true, 00:07:53.317 "data_offset": 2048, 00:07:53.317 "data_size": 63488 00:07:53.317 } 00:07:53.317 ] 00:07:53.317 }' 00:07:53.317 04:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.317 04:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.885 04:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:53.885 04:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:53.885 [2024-11-21 04:54:10.493644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.824 [2024-11-21 04:54:11.413569] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:54.824 [2024-11-21 04:54:11.413640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:54.824 [2024-11-21 04:54:11.413859] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:07:54.824 04:54:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:54.824 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.825 "name": "raid_bdev1", 00:07:54.825 "uuid": "23c4308e-f2e4-4865-9b38-50b829329a63", 00:07:54.825 "strip_size_kb": 0, 00:07:54.825 "state": "online", 00:07:54.825 "raid_level": "raid1", 00:07:54.825 "superblock": true, 00:07:54.825 "num_base_bdevs": 2, 00:07:54.825 "num_base_bdevs_discovered": 1, 00:07:54.825 "num_base_bdevs_operational": 1, 00:07:54.825 "base_bdevs_list": [ 00:07:54.825 { 00:07:54.825 "name": null, 00:07:54.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.825 "is_configured": false, 00:07:54.825 "data_offset": 0, 00:07:54.825 "data_size": 63488 00:07:54.825 }, 00:07:54.825 { 00:07:54.825 "name": "BaseBdev2", 00:07:54.825 "uuid": "eff9975a-b015-57ad-8ea5-67c6288f479b", 00:07:54.825 "is_configured": true, 00:07:54.825 "data_offset": 2048, 00:07:54.825 "data_size": 63488 00:07:54.825 } 00:07:54.825 ] 00:07:54.825 }' 00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.825 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.394 [2024-11-21 04:54:11.854638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.394 [2024-11-21 04:54:11.854780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.394 [2024-11-21 04:54:11.857386] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.394 [2024-11-21 04:54:11.857497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.394 [2024-11-21 04:54:11.857609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.394 [2024-11-21 04:54:11.857691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:55.394 { 00:07:55.394 "results": [ 00:07:55.394 { 00:07:55.394 "job": "raid_bdev1", 00:07:55.394 "core_mask": "0x1", 00:07:55.394 "workload": "randrw", 00:07:55.394 "percentage": 50, 00:07:55.394 "status": "finished", 00:07:55.394 "queue_depth": 1, 00:07:55.394 "io_size": 131072, 00:07:55.394 "runtime": 1.361412, 00:07:55.394 "iops": 18039.359135955903, 00:07:55.394 "mibps": 2254.919891994488, 00:07:55.394 "io_failed": 0, 00:07:55.394 "io_timeout": 0, 00:07:55.394 "avg_latency_us": 52.8880763568919, 00:07:55.394 "min_latency_us": 22.134497816593885, 00:07:55.394 "max_latency_us": 1352.216593886463 00:07:55.394 } 00:07:55.394 ], 00:07:55.394 "core_count": 1 00:07:55.394 } 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75055 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75055 ']' 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75055 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75055 00:07:55.394 killing process with pid 75055 00:07:55.394 04:54:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75055' 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75055 00:07:55.394 [2024-11-21 04:54:11.905528] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.394 04:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75055 00:07:55.394 [2024-11-21 04:54:11.933822] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.653 04:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nl6Ns8VEK5 00:07:55.653 04:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:55.653 04:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:55.653 04:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:55.653 04:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:55.653 04:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.653 04:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:55.654 04:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:55.654 00:07:55.654 real 0m3.334s 00:07:55.654 user 0m4.152s 00:07:55.654 sys 0m0.560s 00:07:55.654 04:54:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.654 ************************************ 00:07:55.654 END TEST raid_write_error_test 00:07:55.654 ************************************ 00:07:55.654 04:54:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.654 04:54:12 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:55.654 04:54:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:55.654 04:54:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:55.654 04:54:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:55.654 04:54:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.654 04:54:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.654 ************************************ 00:07:55.654 START TEST raid_state_function_test 00:07:55.654 ************************************ 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75188 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.654 04:54:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75188' 00:07:55.654 Process raid pid: 75188 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75188 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75188 ']' 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.654 04:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.913 [2024-11-21 04:54:12.427142] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:07:55.914 [2024-11-21 04:54:12.427342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.914 [2024-11-21 04:54:12.575671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.914 [2024-11-21 04:54:12.618986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.172 [2024-11-21 04:54:12.694831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.172 [2024-11-21 04:54:12.694954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.741 [2024-11-21 04:54:13.270033] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.741 [2024-11-21 04:54:13.270227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.741 [2024-11-21 04:54:13.270259] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.741 [2024-11-21 04:54:13.270285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.741 [2024-11-21 04:54:13.270307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:07:56.741 [2024-11-21 04:54:13.270332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.741 04:54:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.741 "name": "Existed_Raid", 00:07:56.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.741 "strip_size_kb": 64, 00:07:56.741 "state": "configuring", 00:07:56.741 "raid_level": "raid0", 00:07:56.741 "superblock": false, 00:07:56.741 "num_base_bdevs": 3, 00:07:56.741 "num_base_bdevs_discovered": 0, 00:07:56.741 "num_base_bdevs_operational": 3, 00:07:56.741 "base_bdevs_list": [ 00:07:56.741 { 00:07:56.741 "name": "BaseBdev1", 00:07:56.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.741 "is_configured": false, 00:07:56.741 "data_offset": 0, 00:07:56.741 "data_size": 0 00:07:56.741 }, 00:07:56.741 { 00:07:56.741 "name": "BaseBdev2", 00:07:56.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.741 "is_configured": false, 00:07:56.741 "data_offset": 0, 00:07:56.741 "data_size": 0 00:07:56.741 }, 00:07:56.741 { 00:07:56.741 "name": "BaseBdev3", 00:07:56.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.741 "is_configured": false, 00:07:56.741 "data_offset": 0, 00:07:56.741 "data_size": 0 00:07:56.741 } 00:07:56.741 ] 00:07:56.741 }' 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.741 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.000 [2024-11-21 04:54:13.693227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.000 [2024-11-21 04:54:13.693347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 
00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.000 [2024-11-21 04:54:13.701227] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.000 [2024-11-21 04:54:13.701377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.000 [2024-11-21 04:54:13.701402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.000 [2024-11-21 04:54:13.701412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.000 [2024-11-21 04:54:13.701419] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.000 [2024-11-21 04:54:13.701428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.000 [2024-11-21 04:54:13.724508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.000 BaseBdev1 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.000 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.259 [ 00:07:57.259 { 00:07:57.259 "name": "BaseBdev1", 00:07:57.259 "aliases": [ 00:07:57.259 "35b5f6ae-539b-461b-8ec8-d243e06a68c1" 00:07:57.259 ], 00:07:57.259 "product_name": "Malloc disk", 00:07:57.259 "block_size": 512, 00:07:57.259 "num_blocks": 65536, 00:07:57.259 "uuid": "35b5f6ae-539b-461b-8ec8-d243e06a68c1", 00:07:57.259 "assigned_rate_limits": { 00:07:57.259 "rw_ios_per_sec": 0, 00:07:57.259 "rw_mbytes_per_sec": 0, 00:07:57.259 "r_mbytes_per_sec": 0, 00:07:57.259 "w_mbytes_per_sec": 0 00:07:57.259 }, 
00:07:57.259 "claimed": true, 00:07:57.259 "claim_type": "exclusive_write", 00:07:57.259 "zoned": false, 00:07:57.259 "supported_io_types": { 00:07:57.259 "read": true, 00:07:57.259 "write": true, 00:07:57.259 "unmap": true, 00:07:57.259 "flush": true, 00:07:57.259 "reset": true, 00:07:57.259 "nvme_admin": false, 00:07:57.259 "nvme_io": false, 00:07:57.259 "nvme_io_md": false, 00:07:57.259 "write_zeroes": true, 00:07:57.259 "zcopy": true, 00:07:57.259 "get_zone_info": false, 00:07:57.259 "zone_management": false, 00:07:57.259 "zone_append": false, 00:07:57.259 "compare": false, 00:07:57.259 "compare_and_write": false, 00:07:57.259 "abort": true, 00:07:57.259 "seek_hole": false, 00:07:57.259 "seek_data": false, 00:07:57.259 "copy": true, 00:07:57.259 "nvme_iov_md": false 00:07:57.259 }, 00:07:57.259 "memory_domains": [ 00:07:57.259 { 00:07:57.259 "dma_device_id": "system", 00:07:57.259 "dma_device_type": 1 00:07:57.259 }, 00:07:57.259 { 00:07:57.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.259 "dma_device_type": 2 00:07:57.259 } 00:07:57.259 ], 00:07:57.259 "driver_specific": {} 00:07:57.259 } 00:07:57.259 ] 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.259 04:54:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.259 "name": "Existed_Raid", 00:07:57.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.259 "strip_size_kb": 64, 00:07:57.259 "state": "configuring", 00:07:57.259 "raid_level": "raid0", 00:07:57.259 "superblock": false, 00:07:57.259 "num_base_bdevs": 3, 00:07:57.259 "num_base_bdevs_discovered": 1, 00:07:57.259 "num_base_bdevs_operational": 3, 00:07:57.259 "base_bdevs_list": [ 00:07:57.259 { 00:07:57.259 "name": "BaseBdev1", 00:07:57.259 "uuid": "35b5f6ae-539b-461b-8ec8-d243e06a68c1", 00:07:57.259 "is_configured": true, 00:07:57.259 "data_offset": 0, 00:07:57.259 "data_size": 65536 00:07:57.259 }, 00:07:57.259 { 00:07:57.259 "name": "BaseBdev2", 00:07:57.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.259 "is_configured": false, 00:07:57.259 
"data_offset": 0, 00:07:57.259 "data_size": 0 00:07:57.259 }, 00:07:57.259 { 00:07:57.259 "name": "BaseBdev3", 00:07:57.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.259 "is_configured": false, 00:07:57.259 "data_offset": 0, 00:07:57.259 "data_size": 0 00:07:57.259 } 00:07:57.259 ] 00:07:57.259 }' 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.259 04:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.518 [2024-11-21 04:54:14.183787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.518 [2024-11-21 04:54:14.183870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.518 [2024-11-21 04:54:14.195793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.518 [2024-11-21 04:54:14.198194] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.518 [2024-11-21 04:54:14.198244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:57.518 [2024-11-21 04:54:14.198254] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.518 [2024-11-21 04:54:14.198264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.518 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.776 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.776 "name": "Existed_Raid", 00:07:57.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.776 "strip_size_kb": 64, 00:07:57.776 "state": "configuring", 00:07:57.776 "raid_level": "raid0", 00:07:57.776 "superblock": false, 00:07:57.776 "num_base_bdevs": 3, 00:07:57.776 "num_base_bdevs_discovered": 1, 00:07:57.776 "num_base_bdevs_operational": 3, 00:07:57.776 "base_bdevs_list": [ 00:07:57.776 { 00:07:57.776 "name": "BaseBdev1", 00:07:57.776 "uuid": "35b5f6ae-539b-461b-8ec8-d243e06a68c1", 00:07:57.776 "is_configured": true, 00:07:57.776 "data_offset": 0, 00:07:57.776 "data_size": 65536 00:07:57.776 }, 00:07:57.776 { 00:07:57.776 "name": "BaseBdev2", 00:07:57.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.776 "is_configured": false, 00:07:57.776 "data_offset": 0, 00:07:57.776 "data_size": 0 00:07:57.776 }, 00:07:57.776 { 00:07:57.776 "name": "BaseBdev3", 00:07:57.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.776 "is_configured": false, 00:07:57.776 "data_offset": 0, 00:07:57.776 "data_size": 0 00:07:57.776 } 00:07:57.776 ] 00:07:57.776 }' 00:07:57.776 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.776 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.034 [2024-11-21 04:54:14.588127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.034 BaseBdev2 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.034 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.034 [ 00:07:58.034 { 00:07:58.034 "name": "BaseBdev2", 00:07:58.034 "aliases": [ 00:07:58.034 "2caa279b-4146-46c2-8737-b14b52f606c8" 00:07:58.034 ], 00:07:58.034 
"product_name": "Malloc disk", 00:07:58.034 "block_size": 512, 00:07:58.034 "num_blocks": 65536, 00:07:58.034 "uuid": "2caa279b-4146-46c2-8737-b14b52f606c8", 00:07:58.034 "assigned_rate_limits": { 00:07:58.034 "rw_ios_per_sec": 0, 00:07:58.034 "rw_mbytes_per_sec": 0, 00:07:58.034 "r_mbytes_per_sec": 0, 00:07:58.034 "w_mbytes_per_sec": 0 00:07:58.034 }, 00:07:58.034 "claimed": true, 00:07:58.034 "claim_type": "exclusive_write", 00:07:58.034 "zoned": false, 00:07:58.034 "supported_io_types": { 00:07:58.034 "read": true, 00:07:58.034 "write": true, 00:07:58.034 "unmap": true, 00:07:58.034 "flush": true, 00:07:58.034 "reset": true, 00:07:58.034 "nvme_admin": false, 00:07:58.034 "nvme_io": false, 00:07:58.034 "nvme_io_md": false, 00:07:58.034 "write_zeroes": true, 00:07:58.034 "zcopy": true, 00:07:58.034 "get_zone_info": false, 00:07:58.034 "zone_management": false, 00:07:58.034 "zone_append": false, 00:07:58.034 "compare": false, 00:07:58.034 "compare_and_write": false, 00:07:58.034 "abort": true, 00:07:58.034 "seek_hole": false, 00:07:58.034 "seek_data": false, 00:07:58.034 "copy": true, 00:07:58.034 "nvme_iov_md": false 00:07:58.034 }, 00:07:58.034 "memory_domains": [ 00:07:58.034 { 00:07:58.035 "dma_device_id": "system", 00:07:58.035 "dma_device_type": 1 00:07:58.035 }, 00:07:58.035 { 00:07:58.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.035 "dma_device_type": 2 00:07:58.035 } 00:07:58.035 ], 00:07:58.035 "driver_specific": {} 00:07:58.035 } 00:07:58.035 ] 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.035 "name": "Existed_Raid", 00:07:58.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.035 "strip_size_kb": 64, 00:07:58.035 "state": "configuring", 00:07:58.035 "raid_level": "raid0", 00:07:58.035 "superblock": false, 00:07:58.035 
"num_base_bdevs": 3, 00:07:58.035 "num_base_bdevs_discovered": 2, 00:07:58.035 "num_base_bdevs_operational": 3, 00:07:58.035 "base_bdevs_list": [ 00:07:58.035 { 00:07:58.035 "name": "BaseBdev1", 00:07:58.035 "uuid": "35b5f6ae-539b-461b-8ec8-d243e06a68c1", 00:07:58.035 "is_configured": true, 00:07:58.035 "data_offset": 0, 00:07:58.035 "data_size": 65536 00:07:58.035 }, 00:07:58.035 { 00:07:58.035 "name": "BaseBdev2", 00:07:58.035 "uuid": "2caa279b-4146-46c2-8737-b14b52f606c8", 00:07:58.035 "is_configured": true, 00:07:58.035 "data_offset": 0, 00:07:58.035 "data_size": 65536 00:07:58.035 }, 00:07:58.035 { 00:07:58.035 "name": "BaseBdev3", 00:07:58.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.035 "is_configured": false, 00:07:58.035 "data_offset": 0, 00:07:58.035 "data_size": 0 00:07:58.035 } 00:07:58.035 ] 00:07:58.035 }' 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.035 04:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.604 [2024-11-21 04:54:15.075435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:58.604 [2024-11-21 04:54:15.075489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:58.604 [2024-11-21 04:54:15.075505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:58.604 [2024-11-21 04:54:15.075862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:58.604 [2024-11-21 04:54:15.076060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:07:58.604 [2024-11-21 04:54:15.076073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:58.604 [2024-11-21 04:54:15.076364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.604 BaseBdev3 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.604 [ 00:07:58.604 { 00:07:58.604 "name": "BaseBdev3", 00:07:58.604 "aliases": [ 00:07:58.604 
"5a38006a-087e-40c9-bed9-9d0585ca3a6d" 00:07:58.604 ], 00:07:58.604 "product_name": "Malloc disk", 00:07:58.604 "block_size": 512, 00:07:58.604 "num_blocks": 65536, 00:07:58.604 "uuid": "5a38006a-087e-40c9-bed9-9d0585ca3a6d", 00:07:58.604 "assigned_rate_limits": { 00:07:58.604 "rw_ios_per_sec": 0, 00:07:58.604 "rw_mbytes_per_sec": 0, 00:07:58.604 "r_mbytes_per_sec": 0, 00:07:58.604 "w_mbytes_per_sec": 0 00:07:58.604 }, 00:07:58.604 "claimed": true, 00:07:58.604 "claim_type": "exclusive_write", 00:07:58.604 "zoned": false, 00:07:58.604 "supported_io_types": { 00:07:58.604 "read": true, 00:07:58.604 "write": true, 00:07:58.604 "unmap": true, 00:07:58.604 "flush": true, 00:07:58.604 "reset": true, 00:07:58.604 "nvme_admin": false, 00:07:58.604 "nvme_io": false, 00:07:58.604 "nvme_io_md": false, 00:07:58.604 "write_zeroes": true, 00:07:58.604 "zcopy": true, 00:07:58.604 "get_zone_info": false, 00:07:58.604 "zone_management": false, 00:07:58.604 "zone_append": false, 00:07:58.604 "compare": false, 00:07:58.604 "compare_and_write": false, 00:07:58.604 "abort": true, 00:07:58.604 "seek_hole": false, 00:07:58.604 "seek_data": false, 00:07:58.604 "copy": true, 00:07:58.604 "nvme_iov_md": false 00:07:58.604 }, 00:07:58.604 "memory_domains": [ 00:07:58.604 { 00:07:58.604 "dma_device_id": "system", 00:07:58.604 "dma_device_type": 1 00:07:58.604 }, 00:07:58.604 { 00:07:58.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.604 "dma_device_type": 2 00:07:58.604 } 00:07:58.604 ], 00:07:58.604 "driver_specific": {} 00:07:58.604 } 00:07:58.604 ] 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.604 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.605 
04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.605 "name": "Existed_Raid", 00:07:58.605 "uuid": "359a1291-529f-48fa-9cb1-92ba79e70b15", 00:07:58.605 "strip_size_kb": 64, 00:07:58.605 "state": "online", 00:07:58.605 
"raid_level": "raid0", 00:07:58.605 "superblock": false, 00:07:58.605 "num_base_bdevs": 3, 00:07:58.605 "num_base_bdevs_discovered": 3, 00:07:58.605 "num_base_bdevs_operational": 3, 00:07:58.605 "base_bdevs_list": [ 00:07:58.605 { 00:07:58.605 "name": "BaseBdev1", 00:07:58.605 "uuid": "35b5f6ae-539b-461b-8ec8-d243e06a68c1", 00:07:58.605 "is_configured": true, 00:07:58.605 "data_offset": 0, 00:07:58.605 "data_size": 65536 00:07:58.605 }, 00:07:58.605 { 00:07:58.605 "name": "BaseBdev2", 00:07:58.605 "uuid": "2caa279b-4146-46c2-8737-b14b52f606c8", 00:07:58.605 "is_configured": true, 00:07:58.605 "data_offset": 0, 00:07:58.605 "data_size": 65536 00:07:58.605 }, 00:07:58.605 { 00:07:58.605 "name": "BaseBdev3", 00:07:58.605 "uuid": "5a38006a-087e-40c9-bed9-9d0585ca3a6d", 00:07:58.605 "is_configured": true, 00:07:58.605 "data_offset": 0, 00:07:58.605 "data_size": 65536 00:07:58.605 } 00:07:58.605 ] 00:07:58.605 }' 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.605 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.865 [2024-11-21 04:54:15.539044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.865 "name": "Existed_Raid", 00:07:58.865 "aliases": [ 00:07:58.865 "359a1291-529f-48fa-9cb1-92ba79e70b15" 00:07:58.865 ], 00:07:58.865 "product_name": "Raid Volume", 00:07:58.865 "block_size": 512, 00:07:58.865 "num_blocks": 196608, 00:07:58.865 "uuid": "359a1291-529f-48fa-9cb1-92ba79e70b15", 00:07:58.865 "assigned_rate_limits": { 00:07:58.865 "rw_ios_per_sec": 0, 00:07:58.865 "rw_mbytes_per_sec": 0, 00:07:58.865 "r_mbytes_per_sec": 0, 00:07:58.865 "w_mbytes_per_sec": 0 00:07:58.865 }, 00:07:58.865 "claimed": false, 00:07:58.865 "zoned": false, 00:07:58.865 "supported_io_types": { 00:07:58.865 "read": true, 00:07:58.865 "write": true, 00:07:58.865 "unmap": true, 00:07:58.865 "flush": true, 00:07:58.865 "reset": true, 00:07:58.865 "nvme_admin": false, 00:07:58.865 "nvme_io": false, 00:07:58.865 "nvme_io_md": false, 00:07:58.865 "write_zeroes": true, 00:07:58.865 "zcopy": false, 00:07:58.865 "get_zone_info": false, 00:07:58.865 "zone_management": false, 00:07:58.865 "zone_append": false, 00:07:58.865 "compare": false, 00:07:58.865 "compare_and_write": false, 00:07:58.865 "abort": false, 00:07:58.865 "seek_hole": false, 00:07:58.865 "seek_data": false, 00:07:58.865 "copy": false, 00:07:58.865 "nvme_iov_md": false 00:07:58.865 }, 00:07:58.865 "memory_domains": [ 00:07:58.865 { 00:07:58.865 "dma_device_id": "system", 00:07:58.865 "dma_device_type": 1 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.865 "dma_device_type": 2 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "dma_device_id": "system", 00:07:58.865 "dma_device_type": 1 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.865 "dma_device_type": 2 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "dma_device_id": "system", 00:07:58.865 "dma_device_type": 1 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.865 "dma_device_type": 2 00:07:58.865 } 00:07:58.865 ], 00:07:58.865 "driver_specific": { 00:07:58.865 "raid": { 00:07:58.865 "uuid": "359a1291-529f-48fa-9cb1-92ba79e70b15", 00:07:58.865 "strip_size_kb": 64, 00:07:58.865 "state": "online", 00:07:58.865 "raid_level": "raid0", 00:07:58.865 "superblock": false, 00:07:58.865 "num_base_bdevs": 3, 00:07:58.865 "num_base_bdevs_discovered": 3, 00:07:58.865 "num_base_bdevs_operational": 3, 00:07:58.865 "base_bdevs_list": [ 00:07:58.865 { 00:07:58.865 "name": "BaseBdev1", 00:07:58.865 "uuid": "35b5f6ae-539b-461b-8ec8-d243e06a68c1", 00:07:58.865 "is_configured": true, 00:07:58.865 "data_offset": 0, 00:07:58.865 "data_size": 65536 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "name": "BaseBdev2", 00:07:58.865 "uuid": "2caa279b-4146-46c2-8737-b14b52f606c8", 00:07:58.865 "is_configured": true, 00:07:58.865 "data_offset": 0, 00:07:58.865 "data_size": 65536 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "name": "BaseBdev3", 00:07:58.865 "uuid": "5a38006a-087e-40c9-bed9-9d0585ca3a6d", 00:07:58.865 "is_configured": true, 00:07:58.865 "data_offset": 0, 00:07:58.865 "data_size": 65536 00:07:58.865 } 00:07:58.865 ] 00:07:58.865 } 00:07:58.865 } 00:07:58.865 }' 00:07:58.865 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 
00:07:59.125 BaseBdev2 00:07:59.125 BaseBdev3' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.125 04:54:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 [2024-11-21 04:54:15.825214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.125 [2024-11-21 04:54:15.825309] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.125 [2024-11-21 04:54:15.825425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.125 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.125 04:54:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.385 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.385 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.385 "name": "Existed_Raid", 00:07:59.385 "uuid": "359a1291-529f-48fa-9cb1-92ba79e70b15", 00:07:59.385 "strip_size_kb": 64, 00:07:59.385 "state": "offline", 00:07:59.385 "raid_level": "raid0", 00:07:59.385 "superblock": false, 00:07:59.385 "num_base_bdevs": 3, 00:07:59.385 "num_base_bdevs_discovered": 2, 00:07:59.385 "num_base_bdevs_operational": 2, 00:07:59.385 "base_bdevs_list": [ 00:07:59.385 { 00:07:59.385 "name": null, 00:07:59.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.385 "is_configured": false, 00:07:59.385 "data_offset": 0, 00:07:59.385 "data_size": 65536 00:07:59.385 }, 00:07:59.385 { 00:07:59.385 "name": "BaseBdev2", 00:07:59.385 "uuid": "2caa279b-4146-46c2-8737-b14b52f606c8", 00:07:59.385 "is_configured": true, 00:07:59.385 "data_offset": 0, 00:07:59.385 "data_size": 65536 00:07:59.385 }, 00:07:59.385 { 00:07:59.385 "name": "BaseBdev3", 00:07:59.385 "uuid": "5a38006a-087e-40c9-bed9-9d0585ca3a6d", 00:07:59.385 "is_configured": true, 00:07:59.385 "data_offset": 0, 00:07:59.385 "data_size": 65536 00:07:59.385 } 00:07:59.385 ] 00:07:59.385 }' 00:07:59.385 04:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.385 04:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.645 [2024-11-21 04:54:16.308937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.645 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.905 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:07:59.905 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:59.905 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:59.905 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.905 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.905 [2024-11-21 04:54:16.385862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:59.906 [2024-11-21 04:54:16.385942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.906 BaseBdev2 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.906 [ 00:07:59.906 { 00:07:59.906 "name": "BaseBdev2", 00:07:59.906 "aliases": [ 00:07:59.906 "a16afd1c-5787-49f5-8d1b-538a3729b9c6" 00:07:59.906 ], 00:07:59.906 "product_name": "Malloc disk", 00:07:59.906 "block_size": 512, 00:07:59.906 "num_blocks": 65536, 00:07:59.906 "uuid": "a16afd1c-5787-49f5-8d1b-538a3729b9c6", 00:07:59.906 "assigned_rate_limits": { 00:07:59.906 "rw_ios_per_sec": 0, 00:07:59.906 "rw_mbytes_per_sec": 0, 00:07:59.906 "r_mbytes_per_sec": 0, 00:07:59.906 "w_mbytes_per_sec": 0 00:07:59.906 }, 00:07:59.906 "claimed": false, 00:07:59.906 "zoned": false, 00:07:59.906 "supported_io_types": { 00:07:59.906 "read": true, 00:07:59.906 "write": true, 00:07:59.906 "unmap": true, 00:07:59.906 "flush": true, 00:07:59.906 "reset": true, 00:07:59.906 "nvme_admin": false, 00:07:59.906 "nvme_io": false, 00:07:59.906 "nvme_io_md": false, 00:07:59.906 "write_zeroes": true, 00:07:59.906 "zcopy": true, 00:07:59.906 "get_zone_info": false, 00:07:59.906 "zone_management": false, 00:07:59.906 "zone_append": false, 00:07:59.906 "compare": false, 00:07:59.906 "compare_and_write": false, 00:07:59.906 "abort": true, 00:07:59.906 "seek_hole": false, 00:07:59.906 "seek_data": false, 00:07:59.906 "copy": true, 00:07:59.906 "nvme_iov_md": false 00:07:59.906 }, 00:07:59.906 "memory_domains": [ 00:07:59.906 { 00:07:59.906 "dma_device_id": "system", 00:07:59.906 "dma_device_type": 1 00:07:59.906 }, 00:07:59.906 { 00:07:59.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.906 "dma_device_type": 2 00:07:59.906 } 00:07:59.906 ], 00:07:59.906 "driver_specific": {} 00:07:59.906 } 00:07:59.906 ] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.906 BaseBdev3 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.906 
04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.906 [ 00:07:59.906 { 00:07:59.906 "name": "BaseBdev3", 00:07:59.906 "aliases": [ 00:07:59.906 "fbe1ef48-6182-4871-8669-f5972df3bbcb" 00:07:59.906 ], 00:07:59.906 "product_name": "Malloc disk", 00:07:59.906 "block_size": 512, 00:07:59.906 "num_blocks": 65536, 00:07:59.906 "uuid": "fbe1ef48-6182-4871-8669-f5972df3bbcb", 00:07:59.906 "assigned_rate_limits": { 00:07:59.906 "rw_ios_per_sec": 0, 00:07:59.906 "rw_mbytes_per_sec": 0, 00:07:59.906 "r_mbytes_per_sec": 0, 00:07:59.906 "w_mbytes_per_sec": 0 00:07:59.906 }, 00:07:59.906 "claimed": false, 00:07:59.906 "zoned": false, 00:07:59.906 "supported_io_types": { 00:07:59.906 "read": true, 00:07:59.906 "write": true, 00:07:59.906 "unmap": true, 00:07:59.906 "flush": true, 00:07:59.906 "reset": true, 00:07:59.906 "nvme_admin": false, 00:07:59.906 "nvme_io": false, 00:07:59.906 "nvme_io_md": false, 00:07:59.906 "write_zeroes": true, 00:07:59.906 "zcopy": true, 00:07:59.906 "get_zone_info": false, 00:07:59.906 "zone_management": false, 00:07:59.906 "zone_append": false, 00:07:59.906 "compare": false, 00:07:59.906 "compare_and_write": false, 00:07:59.906 "abort": true, 00:07:59.906 "seek_hole": false, 00:07:59.906 "seek_data": false, 00:07:59.906 "copy": true, 00:07:59.906 "nvme_iov_md": false 00:07:59.906 }, 00:07:59.906 "memory_domains": [ 00:07:59.906 { 00:07:59.906 "dma_device_id": "system", 00:07:59.906 "dma_device_type": 1 00:07:59.906 }, 00:07:59.906 { 00:07:59.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.906 "dma_device_type": 2 00:07:59.906 } 00:07:59.906 ], 00:07:59.906 "driver_specific": {} 00:07:59.906 } 00:07:59.906 ] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 
-- # (( i++ )) 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.906 [2024-11-21 04:54:16.584992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.906 [2024-11-21 04:54:16.585161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.906 [2024-11-21 04:54:16.585215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.906 [2024-11-21 04:54:16.587431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.906 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.907 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.907 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.907 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.907 04:54:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.907 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.907 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.907 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.907 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.907 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.907 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.907 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.167 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.167 "name": "Existed_Raid", 00:08:00.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.167 "strip_size_kb": 64, 00:08:00.167 "state": "configuring", 00:08:00.167 "raid_level": "raid0", 00:08:00.167 "superblock": false, 00:08:00.167 "num_base_bdevs": 3, 00:08:00.167 "num_base_bdevs_discovered": 2, 00:08:00.167 "num_base_bdevs_operational": 3, 00:08:00.167 "base_bdevs_list": [ 00:08:00.167 { 00:08:00.167 "name": "BaseBdev1", 00:08:00.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.167 "is_configured": false, 00:08:00.167 "data_offset": 0, 00:08:00.167 "data_size": 0 00:08:00.167 }, 00:08:00.167 { 00:08:00.167 "name": "BaseBdev2", 00:08:00.167 "uuid": "a16afd1c-5787-49f5-8d1b-538a3729b9c6", 00:08:00.167 "is_configured": true, 00:08:00.167 "data_offset": 0, 00:08:00.167 "data_size": 65536 00:08:00.167 }, 00:08:00.167 { 00:08:00.167 "name": "BaseBdev3", 00:08:00.167 "uuid": "fbe1ef48-6182-4871-8669-f5972df3bbcb", 00:08:00.167 "is_configured": true, 00:08:00.167 "data_offset": 0, 
00:08:00.167 "data_size": 65536 00:08:00.167 } 00:08:00.167 ] 00:08:00.167 }' 00:08:00.167 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.167 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.426 04:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:00.426 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.426 04:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.426 [2024-11-21 04:54:17.000377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.426 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.426 "name": "Existed_Raid", 00:08:00.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.426 "strip_size_kb": 64, 00:08:00.426 "state": "configuring", 00:08:00.426 "raid_level": "raid0", 00:08:00.426 "superblock": false, 00:08:00.426 "num_base_bdevs": 3, 00:08:00.426 "num_base_bdevs_discovered": 1, 00:08:00.426 "num_base_bdevs_operational": 3, 00:08:00.426 "base_bdevs_list": [ 00:08:00.426 { 00:08:00.426 "name": "BaseBdev1", 00:08:00.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.426 "is_configured": false, 00:08:00.426 "data_offset": 0, 00:08:00.426 "data_size": 0 00:08:00.426 }, 00:08:00.426 { 00:08:00.426 "name": null, 00:08:00.426 "uuid": "a16afd1c-5787-49f5-8d1b-538a3729b9c6", 00:08:00.426 "is_configured": false, 00:08:00.426 "data_offset": 0, 00:08:00.427 "data_size": 65536 00:08:00.427 }, 00:08:00.427 { 00:08:00.427 "name": "BaseBdev3", 00:08:00.427 "uuid": "fbe1ef48-6182-4871-8669-f5972df3bbcb", 00:08:00.427 "is_configured": true, 00:08:00.427 "data_offset": 0, 00:08:00.427 "data_size": 65536 00:08:00.427 } 00:08:00.427 ] 00:08:00.427 }' 00:08:00.427 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.427 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.996 04:54:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.996 [2024-11-21 04:54:17.516189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.996 BaseBdev1 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.996 [ 00:08:00.996 { 00:08:00.996 "name": "BaseBdev1", 00:08:00.996 "aliases": [ 00:08:00.996 "62269cdf-47b5-48cf-bf0c-a45d906f7147" 00:08:00.996 ], 00:08:00.996 "product_name": "Malloc disk", 00:08:00.996 "block_size": 512, 00:08:00.996 "num_blocks": 65536, 00:08:00.996 "uuid": "62269cdf-47b5-48cf-bf0c-a45d906f7147", 00:08:00.996 "assigned_rate_limits": { 00:08:00.996 "rw_ios_per_sec": 0, 00:08:00.996 "rw_mbytes_per_sec": 0, 00:08:00.996 "r_mbytes_per_sec": 0, 00:08:00.996 "w_mbytes_per_sec": 0 00:08:00.996 }, 00:08:00.996 "claimed": true, 00:08:00.996 "claim_type": "exclusive_write", 00:08:00.996 "zoned": false, 00:08:00.996 "supported_io_types": { 00:08:00.996 "read": true, 00:08:00.996 "write": true, 00:08:00.996 "unmap": true, 00:08:00.996 "flush": true, 00:08:00.996 "reset": true, 00:08:00.996 "nvme_admin": false, 00:08:00.996 "nvme_io": false, 00:08:00.996 "nvme_io_md": false, 00:08:00.996 "write_zeroes": true, 00:08:00.996 "zcopy": true, 00:08:00.996 "get_zone_info": false, 00:08:00.996 "zone_management": false, 00:08:00.996 "zone_append": false, 00:08:00.996 "compare": false, 00:08:00.996 "compare_and_write": false, 00:08:00.996 "abort": true, 00:08:00.996 "seek_hole": false, 00:08:00.996 "seek_data": false, 00:08:00.996 
"copy": true, 00:08:00.996 "nvme_iov_md": false 00:08:00.996 }, 00:08:00.996 "memory_domains": [ 00:08:00.996 { 00:08:00.996 "dma_device_id": "system", 00:08:00.996 "dma_device_type": 1 00:08:00.996 }, 00:08:00.996 { 00:08:00.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.996 "dma_device_type": 2 00:08:00.996 } 00:08:00.996 ], 00:08:00.996 "driver_specific": {} 00:08:00.996 } 00:08:00.996 ] 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.996 "name": "Existed_Raid", 00:08:00.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.996 "strip_size_kb": 64, 00:08:00.996 "state": "configuring", 00:08:00.996 "raid_level": "raid0", 00:08:00.996 "superblock": false, 00:08:00.996 "num_base_bdevs": 3, 00:08:00.996 "num_base_bdevs_discovered": 2, 00:08:00.996 "num_base_bdevs_operational": 3, 00:08:00.996 "base_bdevs_list": [ 00:08:00.996 { 00:08:00.996 "name": "BaseBdev1", 00:08:00.996 "uuid": "62269cdf-47b5-48cf-bf0c-a45d906f7147", 00:08:00.996 "is_configured": true, 00:08:00.996 "data_offset": 0, 00:08:00.996 "data_size": 65536 00:08:00.996 }, 00:08:00.996 { 00:08:00.996 "name": null, 00:08:00.996 "uuid": "a16afd1c-5787-49f5-8d1b-538a3729b9c6", 00:08:00.996 "is_configured": false, 00:08:00.996 "data_offset": 0, 00:08:00.996 "data_size": 65536 00:08:00.996 }, 00:08:00.996 { 00:08:00.996 "name": "BaseBdev3", 00:08:00.996 "uuid": "fbe1ef48-6182-4871-8669-f5972df3bbcb", 00:08:00.996 "is_configured": true, 00:08:00.996 "data_offset": 0, 00:08:00.996 "data_size": 65536 00:08:00.996 } 00:08:00.996 ] 00:08:00.996 }' 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.996 04:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.566 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.566 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.567 
04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.567 [2024-11-21 04:54:18.039439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.567 "name": "Existed_Raid", 00:08:01.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.567 "strip_size_kb": 64, 00:08:01.567 "state": "configuring", 00:08:01.567 "raid_level": "raid0", 00:08:01.567 "superblock": false, 00:08:01.567 "num_base_bdevs": 3, 00:08:01.567 "num_base_bdevs_discovered": 1, 00:08:01.567 "num_base_bdevs_operational": 3, 00:08:01.567 "base_bdevs_list": [ 00:08:01.567 { 00:08:01.567 "name": "BaseBdev1", 00:08:01.567 "uuid": "62269cdf-47b5-48cf-bf0c-a45d906f7147", 00:08:01.567 "is_configured": true, 00:08:01.567 "data_offset": 0, 00:08:01.567 "data_size": 65536 00:08:01.567 }, 00:08:01.567 { 00:08:01.567 "name": null, 00:08:01.567 "uuid": "a16afd1c-5787-49f5-8d1b-538a3729b9c6", 00:08:01.567 "is_configured": false, 00:08:01.567 "data_offset": 0, 00:08:01.567 "data_size": 65536 00:08:01.567 }, 00:08:01.567 { 00:08:01.567 "name": null, 00:08:01.567 "uuid": "fbe1ef48-6182-4871-8669-f5972df3bbcb", 00:08:01.567 "is_configured": false, 00:08:01.567 "data_offset": 0, 00:08:01.567 "data_size": 65536 00:08:01.567 } 00:08:01.567 ] 00:08:01.567 }' 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.567 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.826 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.826 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.826 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:01.826 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.826 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.826 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:01.826 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:01.826 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.826 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.826 [2024-11-21 04:54:18.522863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.827 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.086 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.086 "name": "Existed_Raid", 00:08:02.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.086 "strip_size_kb": 64, 00:08:02.086 "state": "configuring", 00:08:02.086 "raid_level": "raid0", 00:08:02.086 "superblock": false, 00:08:02.086 "num_base_bdevs": 3, 00:08:02.086 "num_base_bdevs_discovered": 2, 00:08:02.086 "num_base_bdevs_operational": 3, 00:08:02.086 "base_bdevs_list": [ 00:08:02.086 { 00:08:02.086 "name": "BaseBdev1", 00:08:02.086 "uuid": "62269cdf-47b5-48cf-bf0c-a45d906f7147", 00:08:02.086 "is_configured": true, 00:08:02.086 "data_offset": 0, 00:08:02.086 "data_size": 65536 00:08:02.086 }, 00:08:02.086 { 00:08:02.086 "name": null, 00:08:02.086 "uuid": "a16afd1c-5787-49f5-8d1b-538a3729b9c6", 00:08:02.086 "is_configured": 
false, 00:08:02.086 "data_offset": 0, 00:08:02.086 "data_size": 65536 00:08:02.086 }, 00:08:02.086 { 00:08:02.086 "name": "BaseBdev3", 00:08:02.086 "uuid": "fbe1ef48-6182-4871-8669-f5972df3bbcb", 00:08:02.086 "is_configured": true, 00:08:02.086 "data_offset": 0, 00:08:02.086 "data_size": 65536 00:08:02.086 } 00:08:02.086 ] 00:08:02.086 }' 00:08:02.086 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.086 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.348 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.348 04:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:02.348 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.348 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.348 04:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.348 [2024-11-21 04:54:19.014114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.348 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.611 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.612 "name": "Existed_Raid", 00:08:02.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.612 "strip_size_kb": 64, 00:08:02.612 "state": "configuring", 00:08:02.612 "raid_level": "raid0", 00:08:02.612 "superblock": false, 00:08:02.612 "num_base_bdevs": 3, 00:08:02.612 "num_base_bdevs_discovered": 1, 00:08:02.612 "num_base_bdevs_operational": 3, 00:08:02.612 
"base_bdevs_list": [ 00:08:02.612 { 00:08:02.612 "name": null, 00:08:02.612 "uuid": "62269cdf-47b5-48cf-bf0c-a45d906f7147", 00:08:02.612 "is_configured": false, 00:08:02.612 "data_offset": 0, 00:08:02.612 "data_size": 65536 00:08:02.612 }, 00:08:02.612 { 00:08:02.612 "name": null, 00:08:02.612 "uuid": "a16afd1c-5787-49f5-8d1b-538a3729b9c6", 00:08:02.612 "is_configured": false, 00:08:02.612 "data_offset": 0, 00:08:02.612 "data_size": 65536 00:08:02.612 }, 00:08:02.612 { 00:08:02.612 "name": "BaseBdev3", 00:08:02.612 "uuid": "fbe1ef48-6182-4871-8669-f5972df3bbcb", 00:08:02.612 "is_configured": true, 00:08:02.612 "data_offset": 0, 00:08:02.612 "data_size": 65536 00:08:02.612 } 00:08:02.612 ] 00:08:02.612 }' 00:08:02.612 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.612 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.871 [2024-11-21 04:54:19.545788] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.871 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.130 04:54:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.130 "name": "Existed_Raid", 00:08:03.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.130 "strip_size_kb": 64, 00:08:03.130 "state": "configuring", 00:08:03.130 "raid_level": "raid0", 00:08:03.130 "superblock": false, 00:08:03.130 "num_base_bdevs": 3, 00:08:03.130 "num_base_bdevs_discovered": 2, 00:08:03.130 "num_base_bdevs_operational": 3, 00:08:03.130 "base_bdevs_list": [ 00:08:03.130 { 00:08:03.130 "name": null, 00:08:03.130 "uuid": "62269cdf-47b5-48cf-bf0c-a45d906f7147", 00:08:03.130 "is_configured": false, 00:08:03.130 "data_offset": 0, 00:08:03.130 "data_size": 65536 00:08:03.130 }, 00:08:03.130 { 00:08:03.130 "name": "BaseBdev2", 00:08:03.130 "uuid": "a16afd1c-5787-49f5-8d1b-538a3729b9c6", 00:08:03.130 "is_configured": true, 00:08:03.130 "data_offset": 0, 00:08:03.130 "data_size": 65536 00:08:03.130 }, 00:08:03.130 { 00:08:03.130 "name": "BaseBdev3", 00:08:03.130 "uuid": "fbe1ef48-6182-4871-8669-f5972df3bbcb", 00:08:03.130 "is_configured": true, 00:08:03.130 "data_offset": 0, 00:08:03.130 "data_size": 65536 00:08:03.130 } 00:08:03.130 ] 00:08:03.130 }' 00:08:03.130 04:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.130 04:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.389 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.389 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:03.389 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.389 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.389 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.389 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # 
[[ true == \t\r\u\e ]] 00:08:03.389 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.389 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:03.390 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.390 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.390 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.390 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 62269cdf-47b5-48cf-bf0c-a45d906f7147 00:08:03.390 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.390 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.649 [2024-11-21 04:54:20.125678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:03.649 [2024-11-21 04:54:20.125730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:03.649 [2024-11-21 04:54:20.125741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:03.649 [2024-11-21 04:54:20.126077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:03.649 [2024-11-21 04:54:20.126241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:03.649 [2024-11-21 04:54:20.126257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:03.649 [2024-11-21 04:54:20.126489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.649 NewBaseBdev 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.649 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.649 [ 00:08:03.649 { 00:08:03.649 "name": "NewBaseBdev", 00:08:03.649 "aliases": [ 00:08:03.649 "62269cdf-47b5-48cf-bf0c-a45d906f7147" 00:08:03.649 ], 00:08:03.649 "product_name": "Malloc disk", 00:08:03.649 "block_size": 512, 00:08:03.649 "num_blocks": 65536, 00:08:03.649 "uuid": "62269cdf-47b5-48cf-bf0c-a45d906f7147", 00:08:03.649 "assigned_rate_limits": { 00:08:03.649 "rw_ios_per_sec": 0, 00:08:03.649 "rw_mbytes_per_sec": 0, 00:08:03.649 "r_mbytes_per_sec": 0, 00:08:03.649 "w_mbytes_per_sec": 0 
00:08:03.649 }, 00:08:03.649 "claimed": true, 00:08:03.649 "claim_type": "exclusive_write", 00:08:03.649 "zoned": false, 00:08:03.649 "supported_io_types": { 00:08:03.649 "read": true, 00:08:03.649 "write": true, 00:08:03.649 "unmap": true, 00:08:03.649 "flush": true, 00:08:03.649 "reset": true, 00:08:03.649 "nvme_admin": false, 00:08:03.649 "nvme_io": false, 00:08:03.649 "nvme_io_md": false, 00:08:03.649 "write_zeroes": true, 00:08:03.649 "zcopy": true, 00:08:03.649 "get_zone_info": false, 00:08:03.649 "zone_management": false, 00:08:03.649 "zone_append": false, 00:08:03.649 "compare": false, 00:08:03.650 "compare_and_write": false, 00:08:03.650 "abort": true, 00:08:03.650 "seek_hole": false, 00:08:03.650 "seek_data": false, 00:08:03.650 "copy": true, 00:08:03.650 "nvme_iov_md": false 00:08:03.650 }, 00:08:03.650 "memory_domains": [ 00:08:03.650 { 00:08:03.650 "dma_device_id": "system", 00:08:03.650 "dma_device_type": 1 00:08:03.650 }, 00:08:03.650 { 00:08:03.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.650 "dma_device_type": 2 00:08:03.650 } 00:08:03.650 ], 00:08:03.650 "driver_specific": {} 00:08:03.650 } 00:08:03.650 ] 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.650 04:54:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.650 "name": "Existed_Raid", 00:08:03.650 "uuid": "e79e31ae-21fa-4acb-a1c1-8515a48ff770", 00:08:03.650 "strip_size_kb": 64, 00:08:03.650 "state": "online", 00:08:03.650 "raid_level": "raid0", 00:08:03.650 "superblock": false, 00:08:03.650 "num_base_bdevs": 3, 00:08:03.650 "num_base_bdevs_discovered": 3, 00:08:03.650 "num_base_bdevs_operational": 3, 00:08:03.650 "base_bdevs_list": [ 00:08:03.650 { 00:08:03.650 "name": "NewBaseBdev", 00:08:03.650 "uuid": "62269cdf-47b5-48cf-bf0c-a45d906f7147", 00:08:03.650 "is_configured": true, 00:08:03.650 "data_offset": 0, 00:08:03.650 "data_size": 65536 00:08:03.650 }, 00:08:03.650 { 00:08:03.650 "name": "BaseBdev2", 00:08:03.650 "uuid": "a16afd1c-5787-49f5-8d1b-538a3729b9c6", 00:08:03.650 "is_configured": true, 00:08:03.650 
"data_offset": 0, 00:08:03.650 "data_size": 65536 00:08:03.650 }, 00:08:03.650 { 00:08:03.650 "name": "BaseBdev3", 00:08:03.650 "uuid": "fbe1ef48-6182-4871-8669-f5972df3bbcb", 00:08:03.650 "is_configured": true, 00:08:03.650 "data_offset": 0, 00:08:03.650 "data_size": 65536 00:08:03.650 } 00:08:03.650 ] 00:08:03.650 }' 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.650 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.910 [2024-11-21 04:54:20.601373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.910 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.910 "name": 
"Existed_Raid", 00:08:03.910 "aliases": [ 00:08:03.910 "e79e31ae-21fa-4acb-a1c1-8515a48ff770" 00:08:03.910 ], 00:08:03.910 "product_name": "Raid Volume", 00:08:03.910 "block_size": 512, 00:08:03.910 "num_blocks": 196608, 00:08:03.910 "uuid": "e79e31ae-21fa-4acb-a1c1-8515a48ff770", 00:08:03.910 "assigned_rate_limits": { 00:08:03.910 "rw_ios_per_sec": 0, 00:08:03.910 "rw_mbytes_per_sec": 0, 00:08:03.910 "r_mbytes_per_sec": 0, 00:08:03.910 "w_mbytes_per_sec": 0 00:08:03.910 }, 00:08:03.910 "claimed": false, 00:08:03.910 "zoned": false, 00:08:03.910 "supported_io_types": { 00:08:03.910 "read": true, 00:08:03.910 "write": true, 00:08:03.910 "unmap": true, 00:08:03.910 "flush": true, 00:08:03.910 "reset": true, 00:08:03.910 "nvme_admin": false, 00:08:03.910 "nvme_io": false, 00:08:03.910 "nvme_io_md": false, 00:08:03.910 "write_zeroes": true, 00:08:03.910 "zcopy": false, 00:08:03.910 "get_zone_info": false, 00:08:03.910 "zone_management": false, 00:08:03.910 "zone_append": false, 00:08:03.910 "compare": false, 00:08:03.910 "compare_and_write": false, 00:08:03.910 "abort": false, 00:08:03.910 "seek_hole": false, 00:08:03.910 "seek_data": false, 00:08:03.910 "copy": false, 00:08:03.910 "nvme_iov_md": false 00:08:03.910 }, 00:08:03.910 "memory_domains": [ 00:08:03.910 { 00:08:03.910 "dma_device_id": "system", 00:08:03.910 "dma_device_type": 1 00:08:03.910 }, 00:08:03.910 { 00:08:03.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.910 "dma_device_type": 2 00:08:03.910 }, 00:08:03.910 { 00:08:03.910 "dma_device_id": "system", 00:08:03.910 "dma_device_type": 1 00:08:03.910 }, 00:08:03.910 { 00:08:03.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.910 "dma_device_type": 2 00:08:03.910 }, 00:08:03.910 { 00:08:03.910 "dma_device_id": "system", 00:08:03.910 "dma_device_type": 1 00:08:03.910 }, 00:08:03.910 { 00:08:03.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.910 "dma_device_type": 2 00:08:03.910 } 00:08:03.910 ], 00:08:03.910 "driver_specific": { 
00:08:03.910 "raid": { 00:08:03.910 "uuid": "e79e31ae-21fa-4acb-a1c1-8515a48ff770", 00:08:03.910 "strip_size_kb": 64, 00:08:03.910 "state": "online", 00:08:03.910 "raid_level": "raid0", 00:08:03.910 "superblock": false, 00:08:03.910 "num_base_bdevs": 3, 00:08:03.910 "num_base_bdevs_discovered": 3, 00:08:03.910 "num_base_bdevs_operational": 3, 00:08:03.910 "base_bdevs_list": [ 00:08:03.910 { 00:08:03.910 "name": "NewBaseBdev", 00:08:03.910 "uuid": "62269cdf-47b5-48cf-bf0c-a45d906f7147", 00:08:03.910 "is_configured": true, 00:08:03.910 "data_offset": 0, 00:08:03.910 "data_size": 65536 00:08:03.910 }, 00:08:03.910 { 00:08:03.910 "name": "BaseBdev2", 00:08:03.910 "uuid": "a16afd1c-5787-49f5-8d1b-538a3729b9c6", 00:08:03.910 "is_configured": true, 00:08:03.910 "data_offset": 0, 00:08:03.910 "data_size": 65536 00:08:03.910 }, 00:08:03.910 { 00:08:03.910 "name": "BaseBdev3", 00:08:03.910 "uuid": "fbe1ef48-6182-4871-8669-f5972df3bbcb", 00:08:03.910 "is_configured": true, 00:08:03.910 "data_offset": 0, 00:08:03.910 "data_size": 65536 00:08:03.910 } 00:08:03.910 ] 00:08:03.910 } 00:08:03.910 } 00:08:03.910 }' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:04.170 BaseBdev2 00:08:04.170 BaseBdev3' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.170 04:54:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.170 [2024-11-21 04:54:20.864533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.170 [2024-11-21 04:54:20.864576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.170 [2024-11-21 04:54:20.864684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.170 [2024-11-21 04:54:20.864759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.170 [2024-11-21 04:54:20.864778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75188 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75188 ']' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 
-- # kill -0 75188 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.170 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75188 00:08:04.429 killing process with pid 75188 00:08:04.429 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.429 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.429 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75188' 00:08:04.429 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75188 00:08:04.429 [2024-11-21 04:54:20.905059] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.429 04:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75188 00:08:04.429 [2024-11-21 04:54:20.964900] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.688 00:08:04.688 real 0m8.955s 00:08:04.688 user 0m15.036s 00:08:04.688 sys 0m1.875s 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.688 ************************************ 00:08:04.688 END TEST raid_state_function_test 00:08:04.688 ************************************ 00:08:04.688 04:54:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:04.688 04:54:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:04.688 04:54:21 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.688 04:54:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.688 ************************************ 00:08:04.688 START TEST raid_state_function_test_sb 00:08:04.688 ************************************ 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75792 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.688 Process raid pid: 75792 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75792' 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75792 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75792 ']' 00:08:04.688 
04:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.688 04:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.689 04:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.948 [2024-11-21 04:54:21.449329] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:04.948 [2024-11-21 04:54:21.449449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.948 [2024-11-21 04:54:21.622224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.948 [2024-11-21 04:54:21.667401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.207 [2024-11-21 04:54:21.743976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.207 [2024-11-21 04:54:21.744024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.775 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.775 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:05.775 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n 
Existed_Raid 00:08:05.775 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.775 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.775 [2024-11-21 04:54:22.280555] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.775 [2024-11-21 04:54:22.280635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.775 [2024-11-21 04:54:22.280654] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.775 [2024-11-21 04:54:22.280666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.775 [2024-11-21 04:54:22.280675] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.775 [2024-11-21 04:54:22.280688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.775 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.775 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.775 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.776 
04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.776 "name": "Existed_Raid", 00:08:05.776 "uuid": "bb7fec43-96db-4fe9-9128-688d58d6720a", 00:08:05.776 "strip_size_kb": 64, 00:08:05.776 "state": "configuring", 00:08:05.776 "raid_level": "raid0", 00:08:05.776 "superblock": true, 00:08:05.776 "num_base_bdevs": 3, 00:08:05.776 "num_base_bdevs_discovered": 0, 00:08:05.776 "num_base_bdevs_operational": 3, 00:08:05.776 "base_bdevs_list": [ 00:08:05.776 { 00:08:05.776 "name": "BaseBdev1", 00:08:05.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.776 "is_configured": false, 00:08:05.776 "data_offset": 0, 00:08:05.776 "data_size": 0 00:08:05.776 }, 00:08:05.776 { 00:08:05.776 "name": "BaseBdev2", 00:08:05.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.776 "is_configured": false, 00:08:05.776 "data_offset": 0, 00:08:05.776 "data_size": 0 00:08:05.776 }, 00:08:05.776 { 00:08:05.776 "name": "BaseBdev3", 00:08:05.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.776 "is_configured": 
false, 00:08:05.776 "data_offset": 0, 00:08:05.776 "data_size": 0 00:08:05.776 } 00:08:05.776 ] 00:08:05.776 }' 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.776 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.035 [2024-11-21 04:54:22.731634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.035 [2024-11-21 04:54:22.731689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.035 [2024-11-21 04:54:22.743605] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.035 [2024-11-21 04:54:22.743646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.035 [2024-11-21 04:54:22.743655] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.035 [2024-11-21 04:54:22.743665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.035 
[2024-11-21 04:54:22.743671] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.035 [2024-11-21 04:54:22.743681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.035 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.295 [2024-11-21 04:54:22.770425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.295 BaseBdev1 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.295 [ 00:08:06.295 { 00:08:06.295 "name": "BaseBdev1", 00:08:06.295 "aliases": [ 00:08:06.295 "7bfc1444-2350-4350-8d09-465a5be281c9" 00:08:06.295 ], 00:08:06.295 "product_name": "Malloc disk", 00:08:06.295 "block_size": 512, 00:08:06.295 "num_blocks": 65536, 00:08:06.295 "uuid": "7bfc1444-2350-4350-8d09-465a5be281c9", 00:08:06.295 "assigned_rate_limits": { 00:08:06.295 "rw_ios_per_sec": 0, 00:08:06.295 "rw_mbytes_per_sec": 0, 00:08:06.295 "r_mbytes_per_sec": 0, 00:08:06.295 "w_mbytes_per_sec": 0 00:08:06.295 }, 00:08:06.295 "claimed": true, 00:08:06.295 "claim_type": "exclusive_write", 00:08:06.295 "zoned": false, 00:08:06.295 "supported_io_types": { 00:08:06.295 "read": true, 00:08:06.295 "write": true, 00:08:06.295 "unmap": true, 00:08:06.295 "flush": true, 00:08:06.295 "reset": true, 00:08:06.295 "nvme_admin": false, 00:08:06.295 "nvme_io": false, 00:08:06.295 "nvme_io_md": false, 00:08:06.295 "write_zeroes": true, 00:08:06.295 "zcopy": true, 00:08:06.295 "get_zone_info": false, 00:08:06.295 "zone_management": false, 00:08:06.295 "zone_append": false, 00:08:06.295 "compare": false, 00:08:06.295 "compare_and_write": false, 00:08:06.295 "abort": true, 00:08:06.295 "seek_hole": false, 00:08:06.295 "seek_data": false, 00:08:06.295 "copy": true, 00:08:06.295 "nvme_iov_md": false 00:08:06.295 }, 00:08:06.295 "memory_domains": [ 00:08:06.295 { 00:08:06.295 "dma_device_id": "system", 00:08:06.295 "dma_device_type": 1 00:08:06.295 }, 00:08:06.295 { 00:08:06.295 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:06.295 "dma_device_type": 2 00:08:06.295 } 00:08:06.295 ], 00:08:06.295 "driver_specific": {} 00:08:06.295 } 00:08:06.295 ] 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.295 "name": "Existed_Raid", 00:08:06.295 "uuid": "705233ba-290d-4aa3-ada8-80f2f6a757b6", 00:08:06.295 "strip_size_kb": 64, 00:08:06.295 "state": "configuring", 00:08:06.295 "raid_level": "raid0", 00:08:06.295 "superblock": true, 00:08:06.295 "num_base_bdevs": 3, 00:08:06.295 "num_base_bdevs_discovered": 1, 00:08:06.295 "num_base_bdevs_operational": 3, 00:08:06.295 "base_bdevs_list": [ 00:08:06.295 { 00:08:06.295 "name": "BaseBdev1", 00:08:06.295 "uuid": "7bfc1444-2350-4350-8d09-465a5be281c9", 00:08:06.295 "is_configured": true, 00:08:06.295 "data_offset": 2048, 00:08:06.295 "data_size": 63488 00:08:06.295 }, 00:08:06.295 { 00:08:06.295 "name": "BaseBdev2", 00:08:06.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.295 "is_configured": false, 00:08:06.295 "data_offset": 0, 00:08:06.295 "data_size": 0 00:08:06.295 }, 00:08:06.295 { 00:08:06.295 "name": "BaseBdev3", 00:08:06.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.295 "is_configured": false, 00:08:06.295 "data_offset": 0, 00:08:06.295 "data_size": 0 00:08:06.295 } 00:08:06.295 ] 00:08:06.295 }' 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.295 04:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.554 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.554 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.554 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.554 [2024-11-21 04:54:23.237620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: 
delete raid bdev: Existed_Raid 00:08:06.554 [2024-11-21 04:54:23.237659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:06.554 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.554 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:06.554 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.554 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.554 [2024-11-21 04:54:23.245656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.555 [2024-11-21 04:54:23.247777] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.555 [2024-11-21 04:54:23.247813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.555 [2024-11-21 04:54:23.247822] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.555 [2024-11-21 04:54:23.247832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.555 04:54:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.555 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.814 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.814 "name": "Existed_Raid", 00:08:06.814 "uuid": "377f1c2f-589a-4fcc-a327-3f8f010bc9b1", 00:08:06.814 "strip_size_kb": 64, 00:08:06.814 "state": "configuring", 00:08:06.814 "raid_level": "raid0", 00:08:06.814 "superblock": true, 00:08:06.814 "num_base_bdevs": 3, 00:08:06.814 "num_base_bdevs_discovered": 1, 00:08:06.814 "num_base_bdevs_operational": 3, 00:08:06.814 "base_bdevs_list": [ 00:08:06.814 { 
00:08:06.814 "name": "BaseBdev1", 00:08:06.814 "uuid": "7bfc1444-2350-4350-8d09-465a5be281c9", 00:08:06.814 "is_configured": true, 00:08:06.814 "data_offset": 2048, 00:08:06.814 "data_size": 63488 00:08:06.814 }, 00:08:06.814 { 00:08:06.814 "name": "BaseBdev2", 00:08:06.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.814 "is_configured": false, 00:08:06.814 "data_offset": 0, 00:08:06.814 "data_size": 0 00:08:06.814 }, 00:08:06.814 { 00:08:06.814 "name": "BaseBdev3", 00:08:06.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.814 "is_configured": false, 00:08:06.814 "data_offset": 0, 00:08:06.814 "data_size": 0 00:08:06.814 } 00:08:06.814 ] 00:08:06.814 }' 00:08:06.814 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.814 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.073 [2024-11-21 04:54:23.725751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.073 BaseBdev2 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:07.073 04:54:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.073 [ 00:08:07.073 { 00:08:07.073 "name": "BaseBdev2", 00:08:07.073 "aliases": [ 00:08:07.073 "8d0204be-2727-4cde-bb9a-4ceb3f7e2148" 00:08:07.073 ], 00:08:07.073 "product_name": "Malloc disk", 00:08:07.073 "block_size": 512, 00:08:07.073 "num_blocks": 65536, 00:08:07.073 "uuid": "8d0204be-2727-4cde-bb9a-4ceb3f7e2148", 00:08:07.073 "assigned_rate_limits": { 00:08:07.073 "rw_ios_per_sec": 0, 00:08:07.073 "rw_mbytes_per_sec": 0, 00:08:07.073 "r_mbytes_per_sec": 0, 00:08:07.073 "w_mbytes_per_sec": 0 00:08:07.073 }, 00:08:07.073 "claimed": true, 00:08:07.073 "claim_type": "exclusive_write", 00:08:07.073 "zoned": false, 00:08:07.073 "supported_io_types": { 00:08:07.073 "read": true, 00:08:07.073 "write": true, 00:08:07.073 "unmap": true, 00:08:07.073 "flush": true, 00:08:07.073 "reset": true, 00:08:07.073 "nvme_admin": false, 00:08:07.073 "nvme_io": false, 00:08:07.073 "nvme_io_md": false, 00:08:07.073 "write_zeroes": true, 00:08:07.073 "zcopy": true, 
00:08:07.073 "get_zone_info": false, 00:08:07.073 "zone_management": false, 00:08:07.073 "zone_append": false, 00:08:07.073 "compare": false, 00:08:07.073 "compare_and_write": false, 00:08:07.073 "abort": true, 00:08:07.073 "seek_hole": false, 00:08:07.073 "seek_data": false, 00:08:07.073 "copy": true, 00:08:07.073 "nvme_iov_md": false 00:08:07.073 }, 00:08:07.073 "memory_domains": [ 00:08:07.073 { 00:08:07.073 "dma_device_id": "system", 00:08:07.073 "dma_device_type": 1 00:08:07.073 }, 00:08:07.073 { 00:08:07.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.073 "dma_device_type": 2 00:08:07.073 } 00:08:07.073 ], 00:08:07.073 "driver_specific": {} 00:08:07.073 } 00:08:07.073 ] 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.073 
04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.073 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.332 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.332 "name": "Existed_Raid", 00:08:07.332 "uuid": "377f1c2f-589a-4fcc-a327-3f8f010bc9b1", 00:08:07.332 "strip_size_kb": 64, 00:08:07.332 "state": "configuring", 00:08:07.332 "raid_level": "raid0", 00:08:07.332 "superblock": true, 00:08:07.332 "num_base_bdevs": 3, 00:08:07.332 "num_base_bdevs_discovered": 2, 00:08:07.332 "num_base_bdevs_operational": 3, 00:08:07.332 "base_bdevs_list": [ 00:08:07.332 { 00:08:07.332 "name": "BaseBdev1", 00:08:07.332 "uuid": "7bfc1444-2350-4350-8d09-465a5be281c9", 00:08:07.332 "is_configured": true, 00:08:07.332 "data_offset": 2048, 00:08:07.332 "data_size": 63488 00:08:07.332 }, 00:08:07.332 { 00:08:07.332 "name": "BaseBdev2", 00:08:07.332 "uuid": "8d0204be-2727-4cde-bb9a-4ceb3f7e2148", 00:08:07.332 "is_configured": true, 00:08:07.332 "data_offset": 2048, 00:08:07.332 "data_size": 63488 00:08:07.332 }, 00:08:07.332 { 00:08:07.332 "name": "BaseBdev3", 00:08:07.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.332 
"is_configured": false, 00:08:07.332 "data_offset": 0, 00:08:07.332 "data_size": 0 00:08:07.332 } 00:08:07.332 ] 00:08:07.332 }' 00:08:07.332 04:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.332 04:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.592 [2024-11-21 04:54:24.179365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:07.592 [2024-11-21 04:54:24.179594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:07.592 [2024-11-21 04:54:24.179628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:07.592 BaseBdev3 00:08:07.592 [2024-11-21 04:54:24.180011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:07.592 [2024-11-21 04:54:24.180257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:07.592 [2024-11-21 04:54:24.180278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:07.592 [2024-11-21 04:54:24.180455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:07.592 04:54:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.592 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.592 [ 00:08:07.592 { 00:08:07.592 "name": "BaseBdev3", 00:08:07.592 "aliases": [ 00:08:07.592 "211f86b6-dd65-495d-b4a7-6736bac55842" 00:08:07.592 ], 00:08:07.592 "product_name": "Malloc disk", 00:08:07.592 "block_size": 512, 00:08:07.592 "num_blocks": 65536, 00:08:07.592 "uuid": "211f86b6-dd65-495d-b4a7-6736bac55842", 00:08:07.592 "assigned_rate_limits": { 00:08:07.592 "rw_ios_per_sec": 0, 00:08:07.592 "rw_mbytes_per_sec": 0, 00:08:07.592 "r_mbytes_per_sec": 0, 00:08:07.592 "w_mbytes_per_sec": 0 00:08:07.592 }, 00:08:07.592 "claimed": true, 00:08:07.592 "claim_type": "exclusive_write", 00:08:07.592 "zoned": false, 00:08:07.592 "supported_io_types": { 00:08:07.592 "read": true, 00:08:07.592 "write": true, 00:08:07.592 "unmap": true, 
00:08:07.592 "flush": true, 00:08:07.592 "reset": true, 00:08:07.592 "nvme_admin": false, 00:08:07.592 "nvme_io": false, 00:08:07.592 "nvme_io_md": false, 00:08:07.592 "write_zeroes": true, 00:08:07.592 "zcopy": true, 00:08:07.592 "get_zone_info": false, 00:08:07.592 "zone_management": false, 00:08:07.592 "zone_append": false, 00:08:07.592 "compare": false, 00:08:07.592 "compare_and_write": false, 00:08:07.592 "abort": true, 00:08:07.592 "seek_hole": false, 00:08:07.592 "seek_data": false, 00:08:07.592 "copy": true, 00:08:07.592 "nvme_iov_md": false 00:08:07.592 }, 00:08:07.592 "memory_domains": [ 00:08:07.592 { 00:08:07.592 "dma_device_id": "system", 00:08:07.592 "dma_device_type": 1 00:08:07.592 }, 00:08:07.592 { 00:08:07.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.593 "dma_device_type": 2 00:08:07.593 } 00:08:07.593 ], 00:08:07.593 "driver_specific": {} 00:08:07.593 } 00:08:07.593 ] 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.593 04:54:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.593 "name": "Existed_Raid", 00:08:07.593 "uuid": "377f1c2f-589a-4fcc-a327-3f8f010bc9b1", 00:08:07.593 "strip_size_kb": 64, 00:08:07.593 "state": "online", 00:08:07.593 "raid_level": "raid0", 00:08:07.593 "superblock": true, 00:08:07.593 "num_base_bdevs": 3, 00:08:07.593 "num_base_bdevs_discovered": 3, 00:08:07.593 "num_base_bdevs_operational": 3, 00:08:07.593 "base_bdevs_list": [ 00:08:07.593 { 00:08:07.593 "name": "BaseBdev1", 00:08:07.593 "uuid": "7bfc1444-2350-4350-8d09-465a5be281c9", 00:08:07.593 "is_configured": true, 00:08:07.593 "data_offset": 2048, 00:08:07.593 "data_size": 63488 00:08:07.593 }, 00:08:07.593 { 00:08:07.593 "name": "BaseBdev2", 00:08:07.593 "uuid": "8d0204be-2727-4cde-bb9a-4ceb3f7e2148", 00:08:07.593 
"is_configured": true, 00:08:07.593 "data_offset": 2048, 00:08:07.593 "data_size": 63488 00:08:07.593 }, 00:08:07.593 { 00:08:07.593 "name": "BaseBdev3", 00:08:07.593 "uuid": "211f86b6-dd65-495d-b4a7-6736bac55842", 00:08:07.593 "is_configured": true, 00:08:07.593 "data_offset": 2048, 00:08:07.593 "data_size": 63488 00:08:07.593 } 00:08:07.593 ] 00:08:07.593 }' 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.593 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.162 [2024-11-21 04:54:24.642946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.162 04:54:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.162 "name": "Existed_Raid", 00:08:08.162 "aliases": [ 00:08:08.162 "377f1c2f-589a-4fcc-a327-3f8f010bc9b1" 00:08:08.162 ], 00:08:08.162 "product_name": "Raid Volume", 00:08:08.162 "block_size": 512, 00:08:08.162 "num_blocks": 190464, 00:08:08.162 "uuid": "377f1c2f-589a-4fcc-a327-3f8f010bc9b1", 00:08:08.162 "assigned_rate_limits": { 00:08:08.162 "rw_ios_per_sec": 0, 00:08:08.162 "rw_mbytes_per_sec": 0, 00:08:08.162 "r_mbytes_per_sec": 0, 00:08:08.162 "w_mbytes_per_sec": 0 00:08:08.162 }, 00:08:08.162 "claimed": false, 00:08:08.162 "zoned": false, 00:08:08.162 "supported_io_types": { 00:08:08.162 "read": true, 00:08:08.162 "write": true, 00:08:08.162 "unmap": true, 00:08:08.162 "flush": true, 00:08:08.162 "reset": true, 00:08:08.162 "nvme_admin": false, 00:08:08.162 "nvme_io": false, 00:08:08.162 "nvme_io_md": false, 00:08:08.162 "write_zeroes": true, 00:08:08.162 "zcopy": false, 00:08:08.162 "get_zone_info": false, 00:08:08.162 "zone_management": false, 00:08:08.162 "zone_append": false, 00:08:08.162 "compare": false, 00:08:08.162 "compare_and_write": false, 00:08:08.162 "abort": false, 00:08:08.162 "seek_hole": false, 00:08:08.162 "seek_data": false, 00:08:08.162 "copy": false, 00:08:08.162 "nvme_iov_md": false 00:08:08.162 }, 00:08:08.162 "memory_domains": [ 00:08:08.162 { 00:08:08.162 "dma_device_id": "system", 00:08:08.162 "dma_device_type": 1 00:08:08.162 }, 00:08:08.162 { 00:08:08.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.162 "dma_device_type": 2 00:08:08.162 }, 00:08:08.162 { 00:08:08.162 "dma_device_id": "system", 00:08:08.162 "dma_device_type": 1 00:08:08.162 }, 00:08:08.162 { 00:08:08.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.162 "dma_device_type": 2 00:08:08.162 }, 00:08:08.162 { 00:08:08.162 "dma_device_id": "system", 00:08:08.162 "dma_device_type": 1 00:08:08.162 }, 00:08:08.162 { 00:08:08.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:08.162 "dma_device_type": 2 00:08:08.162 } 00:08:08.162 ], 00:08:08.162 "driver_specific": { 00:08:08.162 "raid": { 00:08:08.162 "uuid": "377f1c2f-589a-4fcc-a327-3f8f010bc9b1", 00:08:08.162 "strip_size_kb": 64, 00:08:08.162 "state": "online", 00:08:08.162 "raid_level": "raid0", 00:08:08.162 "superblock": true, 00:08:08.162 "num_base_bdevs": 3, 00:08:08.162 "num_base_bdevs_discovered": 3, 00:08:08.162 "num_base_bdevs_operational": 3, 00:08:08.162 "base_bdevs_list": [ 00:08:08.162 { 00:08:08.162 "name": "BaseBdev1", 00:08:08.162 "uuid": "7bfc1444-2350-4350-8d09-465a5be281c9", 00:08:08.162 "is_configured": true, 00:08:08.162 "data_offset": 2048, 00:08:08.162 "data_size": 63488 00:08:08.162 }, 00:08:08.162 { 00:08:08.162 "name": "BaseBdev2", 00:08:08.162 "uuid": "8d0204be-2727-4cde-bb9a-4ceb3f7e2148", 00:08:08.162 "is_configured": true, 00:08:08.162 "data_offset": 2048, 00:08:08.162 "data_size": 63488 00:08:08.162 }, 00:08:08.162 { 00:08:08.162 "name": "BaseBdev3", 00:08:08.162 "uuid": "211f86b6-dd65-495d-b4a7-6736bac55842", 00:08:08.162 "is_configured": true, 00:08:08.162 "data_offset": 2048, 00:08:08.162 "data_size": 63488 00:08:08.162 } 00:08:08.162 ] 00:08:08.162 } 00:08:08.162 } 00:08:08.162 }' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:08.162 BaseBdev2 00:08:08.162 BaseBdev3' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.162 04:54:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:08.162 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.163 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.163 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.163 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.163 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.163 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:08.163 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.163 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.163 [2024-11-21 04:54:24.886244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.163 [2024-11-21 04:54:24.886272] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.163 [2024-11-21 04:54:24.886340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 
00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.422 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.422 
"name": "Existed_Raid", 00:08:08.422 "uuid": "377f1c2f-589a-4fcc-a327-3f8f010bc9b1", 00:08:08.422 "strip_size_kb": 64, 00:08:08.422 "state": "offline", 00:08:08.422 "raid_level": "raid0", 00:08:08.422 "superblock": true, 00:08:08.422 "num_base_bdevs": 3, 00:08:08.423 "num_base_bdevs_discovered": 2, 00:08:08.423 "num_base_bdevs_operational": 2, 00:08:08.423 "base_bdevs_list": [ 00:08:08.423 { 00:08:08.423 "name": null, 00:08:08.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.423 "is_configured": false, 00:08:08.423 "data_offset": 0, 00:08:08.423 "data_size": 63488 00:08:08.423 }, 00:08:08.423 { 00:08:08.423 "name": "BaseBdev2", 00:08:08.423 "uuid": "8d0204be-2727-4cde-bb9a-4ceb3f7e2148", 00:08:08.423 "is_configured": true, 00:08:08.423 "data_offset": 2048, 00:08:08.423 "data_size": 63488 00:08:08.423 }, 00:08:08.423 { 00:08:08.423 "name": "BaseBdev3", 00:08:08.423 "uuid": "211f86b6-dd65-495d-b4a7-6736bac55842", 00:08:08.423 "is_configured": true, 00:08:08.423 "data_offset": 2048, 00:08:08.423 "data_size": 63488 00:08:08.423 } 00:08:08.423 ] 00:08:08.423 }' 00:08:08.423 04:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.423 04:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.682 04:54:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.682 [2024-11-21 04:54:25.373951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.682 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- 
# rpc_cmd bdev_malloc_delete BaseBdev3 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.942 [2024-11-21 04:54:25.435006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:08.942 [2024-11-21 04:54:25.435082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.942 04:54:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.942 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.942 BaseBdev2 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.943 [ 00:08:08.943 { 
00:08:08.943 "name": "BaseBdev2", 00:08:08.943 "aliases": [ 00:08:08.943 "5eda5a9a-55e7-46f1-88a2-528707852af1" 00:08:08.943 ], 00:08:08.943 "product_name": "Malloc disk", 00:08:08.943 "block_size": 512, 00:08:08.943 "num_blocks": 65536, 00:08:08.943 "uuid": "5eda5a9a-55e7-46f1-88a2-528707852af1", 00:08:08.943 "assigned_rate_limits": { 00:08:08.943 "rw_ios_per_sec": 0, 00:08:08.943 "rw_mbytes_per_sec": 0, 00:08:08.943 "r_mbytes_per_sec": 0, 00:08:08.943 "w_mbytes_per_sec": 0 00:08:08.943 }, 00:08:08.943 "claimed": false, 00:08:08.943 "zoned": false, 00:08:08.943 "supported_io_types": { 00:08:08.943 "read": true, 00:08:08.943 "write": true, 00:08:08.943 "unmap": true, 00:08:08.943 "flush": true, 00:08:08.943 "reset": true, 00:08:08.943 "nvme_admin": false, 00:08:08.943 "nvme_io": false, 00:08:08.943 "nvme_io_md": false, 00:08:08.943 "write_zeroes": true, 00:08:08.943 "zcopy": true, 00:08:08.943 "get_zone_info": false, 00:08:08.943 "zone_management": false, 00:08:08.943 "zone_append": false, 00:08:08.943 "compare": false, 00:08:08.943 "compare_and_write": false, 00:08:08.943 "abort": true, 00:08:08.943 "seek_hole": false, 00:08:08.943 "seek_data": false, 00:08:08.943 "copy": true, 00:08:08.943 "nvme_iov_md": false 00:08:08.943 }, 00:08:08.943 "memory_domains": [ 00:08:08.943 { 00:08:08.943 "dma_device_id": "system", 00:08:08.943 "dma_device_type": 1 00:08:08.943 }, 00:08:08.943 { 00:08:08.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.943 "dma_device_type": 2 00:08:08.943 } 00:08:08.943 ], 00:08:08.943 "driver_specific": {} 00:08:08.943 } 00:08:08.943 ] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.943 BaseBdev3 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:08:08.943 [ 00:08:08.943 { 00:08:08.943 "name": "BaseBdev3", 00:08:08.943 "aliases": [ 00:08:08.943 "8e7d6958-9508-4b34-8995-0912caf70c65" 00:08:08.943 ], 00:08:08.943 "product_name": "Malloc disk", 00:08:08.943 "block_size": 512, 00:08:08.943 "num_blocks": 65536, 00:08:08.943 "uuid": "8e7d6958-9508-4b34-8995-0912caf70c65", 00:08:08.943 "assigned_rate_limits": { 00:08:08.943 "rw_ios_per_sec": 0, 00:08:08.943 "rw_mbytes_per_sec": 0, 00:08:08.943 "r_mbytes_per_sec": 0, 00:08:08.943 "w_mbytes_per_sec": 0 00:08:08.943 }, 00:08:08.943 "claimed": false, 00:08:08.943 "zoned": false, 00:08:08.943 "supported_io_types": { 00:08:08.943 "read": true, 00:08:08.943 "write": true, 00:08:08.943 "unmap": true, 00:08:08.943 "flush": true, 00:08:08.943 "reset": true, 00:08:08.943 "nvme_admin": false, 00:08:08.943 "nvme_io": false, 00:08:08.943 "nvme_io_md": false, 00:08:08.943 "write_zeroes": true, 00:08:08.943 "zcopy": true, 00:08:08.943 "get_zone_info": false, 00:08:08.943 "zone_management": false, 00:08:08.943 "zone_append": false, 00:08:08.943 "compare": false, 00:08:08.943 "compare_and_write": false, 00:08:08.943 "abort": true, 00:08:08.943 "seek_hole": false, 00:08:08.943 "seek_data": false, 00:08:08.943 "copy": true, 00:08:08.943 "nvme_iov_md": false 00:08:08.943 }, 00:08:08.943 "memory_domains": [ 00:08:08.943 { 00:08:08.943 "dma_device_id": "system", 00:08:08.943 "dma_device_type": 1 00:08:08.943 }, 00:08:08.943 { 00:08:08.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.943 "dma_device_type": 2 00:08:08.943 } 00:08:08.943 ], 00:08:08.943 "driver_specific": {} 00:08:08.943 } 00:08:08.943 ] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:08.943 04:54:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.943 [2024-11-21 04:54:25.624202] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.943 [2024-11-21 04:54:25.624273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.943 [2024-11-21 04:54:25.624308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:08.943 [2024-11-21 04:54:25.626653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.943 
04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.943 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.203 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.203 "name": "Existed_Raid", 00:08:09.203 "uuid": "9d35f45b-feaf-4c23-9424-48c051933caf", 00:08:09.203 "strip_size_kb": 64, 00:08:09.203 "state": "configuring", 00:08:09.203 "raid_level": "raid0", 00:08:09.203 "superblock": true, 00:08:09.203 "num_base_bdevs": 3, 00:08:09.203 "num_base_bdevs_discovered": 2, 00:08:09.203 "num_base_bdevs_operational": 3, 00:08:09.203 "base_bdevs_list": [ 00:08:09.203 { 00:08:09.203 "name": "BaseBdev1", 00:08:09.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.203 "is_configured": false, 00:08:09.203 "data_offset": 0, 00:08:09.203 "data_size": 0 00:08:09.203 }, 00:08:09.203 { 00:08:09.203 "name": "BaseBdev2", 00:08:09.203 "uuid": "5eda5a9a-55e7-46f1-88a2-528707852af1", 00:08:09.203 "is_configured": true, 00:08:09.203 "data_offset": 2048, 00:08:09.203 "data_size": 63488 00:08:09.203 }, 00:08:09.203 { 00:08:09.203 "name": "BaseBdev3", 00:08:09.203 "uuid": "8e7d6958-9508-4b34-8995-0912caf70c65", 00:08:09.203 
"is_configured": true, 00:08:09.203 "data_offset": 2048, 00:08:09.203 "data_size": 63488 00:08:09.203 } 00:08:09.203 ] 00:08:09.203 }' 00:08:09.203 04:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.203 04:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.462 [2024-11-21 04:54:26.109608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.462 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.462 "name": "Existed_Raid", 00:08:09.462 "uuid": "9d35f45b-feaf-4c23-9424-48c051933caf", 00:08:09.462 "strip_size_kb": 64, 00:08:09.462 "state": "configuring", 00:08:09.462 "raid_level": "raid0", 00:08:09.463 "superblock": true, 00:08:09.463 "num_base_bdevs": 3, 00:08:09.463 "num_base_bdevs_discovered": 1, 00:08:09.463 "num_base_bdevs_operational": 3, 00:08:09.463 "base_bdevs_list": [ 00:08:09.463 { 00:08:09.463 "name": "BaseBdev1", 00:08:09.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.463 "is_configured": false, 00:08:09.463 "data_offset": 0, 00:08:09.463 "data_size": 0 00:08:09.463 }, 00:08:09.463 { 00:08:09.463 "name": null, 00:08:09.463 "uuid": "5eda5a9a-55e7-46f1-88a2-528707852af1", 00:08:09.463 "is_configured": false, 00:08:09.463 "data_offset": 0, 00:08:09.463 "data_size": 63488 00:08:09.463 }, 00:08:09.463 { 00:08:09.463 "name": "BaseBdev3", 00:08:09.463 "uuid": "8e7d6958-9508-4b34-8995-0912caf70c65", 00:08:09.463 "is_configured": true, 00:08:09.463 "data_offset": 2048, 00:08:09.463 "data_size": 63488 00:08:09.463 } 00:08:09.463 ] 00:08:09.463 }' 00:08:09.463 04:54:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.463 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.032 [2024-11-21 04:54:26.615818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.032 BaseBdev1 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.032 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.032 [ 00:08:10.032 { 00:08:10.032 "name": "BaseBdev1", 00:08:10.032 "aliases": [ 00:08:10.032 "bd374c42-299b-4b10-84cd-81696fd07dc4" 00:08:10.032 ], 00:08:10.032 "product_name": "Malloc disk", 00:08:10.032 "block_size": 512, 00:08:10.032 "num_blocks": 65536, 00:08:10.032 "uuid": "bd374c42-299b-4b10-84cd-81696fd07dc4", 00:08:10.032 "assigned_rate_limits": { 00:08:10.032 "rw_ios_per_sec": 0, 00:08:10.032 "rw_mbytes_per_sec": 0, 00:08:10.033 "r_mbytes_per_sec": 0, 00:08:10.033 "w_mbytes_per_sec": 0 00:08:10.033 }, 00:08:10.033 "claimed": true, 00:08:10.033 "claim_type": "exclusive_write", 00:08:10.033 "zoned": false, 00:08:10.033 "supported_io_types": { 00:08:10.033 "read": true, 00:08:10.033 "write": true, 00:08:10.033 "unmap": true, 00:08:10.033 "flush": true, 00:08:10.033 "reset": true, 00:08:10.033 "nvme_admin": false, 00:08:10.033 "nvme_io": false, 00:08:10.033 "nvme_io_md": false, 00:08:10.033 "write_zeroes": true, 00:08:10.033 "zcopy": true, 00:08:10.033 "get_zone_info": false, 00:08:10.033 
"zone_management": false, 00:08:10.033 "zone_append": false, 00:08:10.033 "compare": false, 00:08:10.033 "compare_and_write": false, 00:08:10.033 "abort": true, 00:08:10.033 "seek_hole": false, 00:08:10.033 "seek_data": false, 00:08:10.033 "copy": true, 00:08:10.033 "nvme_iov_md": false 00:08:10.033 }, 00:08:10.033 "memory_domains": [ 00:08:10.033 { 00:08:10.033 "dma_device_id": "system", 00:08:10.033 "dma_device_type": 1 00:08:10.033 }, 00:08:10.033 { 00:08:10.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.033 "dma_device_type": 2 00:08:10.033 } 00:08:10.033 ], 00:08:10.033 "driver_specific": {} 00:08:10.033 } 00:08:10.033 ] 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.033 04:54:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.033 "name": "Existed_Raid", 00:08:10.033 "uuid": "9d35f45b-feaf-4c23-9424-48c051933caf", 00:08:10.033 "strip_size_kb": 64, 00:08:10.033 "state": "configuring", 00:08:10.033 "raid_level": "raid0", 00:08:10.033 "superblock": true, 00:08:10.033 "num_base_bdevs": 3, 00:08:10.033 "num_base_bdevs_discovered": 2, 00:08:10.033 "num_base_bdevs_operational": 3, 00:08:10.033 "base_bdevs_list": [ 00:08:10.033 { 00:08:10.033 "name": "BaseBdev1", 00:08:10.033 "uuid": "bd374c42-299b-4b10-84cd-81696fd07dc4", 00:08:10.033 "is_configured": true, 00:08:10.033 "data_offset": 2048, 00:08:10.033 "data_size": 63488 00:08:10.033 }, 00:08:10.033 { 00:08:10.033 "name": null, 00:08:10.033 "uuid": "5eda5a9a-55e7-46f1-88a2-528707852af1", 00:08:10.033 "is_configured": false, 00:08:10.033 "data_offset": 0, 00:08:10.033 "data_size": 63488 00:08:10.033 }, 00:08:10.033 { 00:08:10.033 "name": "BaseBdev3", 00:08:10.033 "uuid": "8e7d6958-9508-4b34-8995-0912caf70c65", 00:08:10.033 "is_configured": true, 00:08:10.033 "data_offset": 2048, 00:08:10.033 "data_size": 63488 00:08:10.033 } 00:08:10.033 ] 00:08:10.033 }' 00:08:10.033 04:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.033 
04:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.603 [2024-11-21 04:54:27.087191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.603 04:54:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.603 "name": "Existed_Raid", 00:08:10.603 "uuid": "9d35f45b-feaf-4c23-9424-48c051933caf", 00:08:10.603 "strip_size_kb": 64, 00:08:10.603 "state": "configuring", 00:08:10.603 "raid_level": "raid0", 00:08:10.603 "superblock": true, 00:08:10.603 "num_base_bdevs": 3, 00:08:10.603 "num_base_bdevs_discovered": 1, 00:08:10.603 "num_base_bdevs_operational": 3, 00:08:10.603 "base_bdevs_list": [ 00:08:10.603 { 00:08:10.603 "name": "BaseBdev1", 00:08:10.603 "uuid": "bd374c42-299b-4b10-84cd-81696fd07dc4", 00:08:10.603 "is_configured": true, 00:08:10.603 "data_offset": 2048, 00:08:10.603 "data_size": 63488 00:08:10.603 }, 00:08:10.603 { 00:08:10.603 "name": null, 00:08:10.603 "uuid": "5eda5a9a-55e7-46f1-88a2-528707852af1", 00:08:10.603 "is_configured": 
false, 00:08:10.603 "data_offset": 0, 00:08:10.603 "data_size": 63488 00:08:10.603 }, 00:08:10.603 { 00:08:10.603 "name": null, 00:08:10.603 "uuid": "8e7d6958-9508-4b34-8995-0912caf70c65", 00:08:10.603 "is_configured": false, 00:08:10.603 "data_offset": 0, 00:08:10.603 "data_size": 63488 00:08:10.603 } 00:08:10.603 ] 00:08:10.603 }' 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.603 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.862 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.862 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.862 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.862 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:10.862 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.122 [2024-11-21 04:54:27.602287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.122 "name": "Existed_Raid", 00:08:11.122 "uuid": "9d35f45b-feaf-4c23-9424-48c051933caf", 00:08:11.122 "strip_size_kb": 64, 00:08:11.122 "state": "configuring", 00:08:11.122 "raid_level": "raid0", 00:08:11.122 "superblock": true, 00:08:11.122 
"num_base_bdevs": 3, 00:08:11.122 "num_base_bdevs_discovered": 2, 00:08:11.122 "num_base_bdevs_operational": 3, 00:08:11.122 "base_bdevs_list": [ 00:08:11.122 { 00:08:11.122 "name": "BaseBdev1", 00:08:11.122 "uuid": "bd374c42-299b-4b10-84cd-81696fd07dc4", 00:08:11.122 "is_configured": true, 00:08:11.122 "data_offset": 2048, 00:08:11.122 "data_size": 63488 00:08:11.122 }, 00:08:11.122 { 00:08:11.122 "name": null, 00:08:11.122 "uuid": "5eda5a9a-55e7-46f1-88a2-528707852af1", 00:08:11.122 "is_configured": false, 00:08:11.122 "data_offset": 0, 00:08:11.122 "data_size": 63488 00:08:11.122 }, 00:08:11.122 { 00:08:11.122 "name": "BaseBdev3", 00:08:11.122 "uuid": "8e7d6958-9508-4b34-8995-0912caf70c65", 00:08:11.122 "is_configured": true, 00:08:11.122 "data_offset": 2048, 00:08:11.122 "data_size": 63488 00:08:11.122 } 00:08:11.122 ] 00:08:11.122 }' 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.122 04:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.383 [2024-11-21 04:54:28.041595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.383 
04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.383 "name": "Existed_Raid", 00:08:11.383 "uuid": "9d35f45b-feaf-4c23-9424-48c051933caf", 00:08:11.383 "strip_size_kb": 64, 00:08:11.383 "state": "configuring", 00:08:11.383 "raid_level": "raid0", 00:08:11.383 "superblock": true, 00:08:11.383 "num_base_bdevs": 3, 00:08:11.383 "num_base_bdevs_discovered": 1, 00:08:11.383 "num_base_bdevs_operational": 3, 00:08:11.383 "base_bdevs_list": [ 00:08:11.383 { 00:08:11.383 "name": null, 00:08:11.383 "uuid": "bd374c42-299b-4b10-84cd-81696fd07dc4", 00:08:11.383 "is_configured": false, 00:08:11.383 "data_offset": 0, 00:08:11.383 "data_size": 63488 00:08:11.383 }, 00:08:11.383 { 00:08:11.383 "name": null, 00:08:11.383 "uuid": "5eda5a9a-55e7-46f1-88a2-528707852af1", 00:08:11.383 "is_configured": false, 00:08:11.383 "data_offset": 0, 00:08:11.383 "data_size": 63488 00:08:11.383 }, 00:08:11.383 { 00:08:11.383 "name": "BaseBdev3", 00:08:11.383 "uuid": "8e7d6958-9508-4b34-8995-0912caf70c65", 00:08:11.383 "is_configured": true, 00:08:11.383 "data_offset": 2048, 00:08:11.383 "data_size": 63488 00:08:11.383 } 00:08:11.383 ] 00:08:11.383 }' 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.383 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.952 
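The `verify_raid_bdev_state` helper seen above fetches the raid bdev via `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares individual fields against the expected values. As an illustration (not SPDK code), the same checks can be replayed in Python against an abbreviated copy of the JSON dumped in the log:

```python
import json

# Abbreviated copy of the "Existed_Raid" info from the log above
# (illustrative; the real RPC output carries more fields per entry).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": null, "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

# The same field comparisons verify_raid_bdev_state performs with jq:
assert raid_bdev_info["state"] == "configuring"
assert raid_bdev_info["raid_level"] == "raid0"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_operational"] == 3

# "discovered" tracks how many slots in base_bdevs_list are configured;
# after deleting BaseBdev1 only BaseBdev3 remains configured here.
configured = sum(b["is_configured"] for b in raid_bdev_info["base_bdevs_list"])
assert configured == raid_bdev_info["num_base_bdevs_discovered"] == 1
```

This mirrors why the log shows `num_base_bdevs_discovered: 1` immediately after `bdev_malloc_delete BaseBdev1`: the deleted slot stays in the list with `"name": null` and `"is_configured": false`.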
04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.952 [2024-11-21 04:54:28.567148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.952 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.953 "name": "Existed_Raid", 00:08:11.953 "uuid": "9d35f45b-feaf-4c23-9424-48c051933caf", 00:08:11.953 "strip_size_kb": 64, 00:08:11.953 "state": "configuring", 00:08:11.953 "raid_level": "raid0", 00:08:11.953 "superblock": true, 00:08:11.953 "num_base_bdevs": 3, 00:08:11.953 "num_base_bdevs_discovered": 2, 00:08:11.953 "num_base_bdevs_operational": 3, 00:08:11.953 "base_bdevs_list": [ 00:08:11.953 { 00:08:11.953 "name": null, 00:08:11.953 "uuid": "bd374c42-299b-4b10-84cd-81696fd07dc4", 00:08:11.953 "is_configured": false, 00:08:11.953 "data_offset": 0, 00:08:11.953 "data_size": 63488 00:08:11.953 }, 00:08:11.953 { 00:08:11.953 "name": "BaseBdev2", 00:08:11.953 "uuid": "5eda5a9a-55e7-46f1-88a2-528707852af1", 00:08:11.953 "is_configured": true, 00:08:11.953 "data_offset": 2048, 00:08:11.953 "data_size": 63488 00:08:11.953 }, 00:08:11.953 { 00:08:11.953 "name": "BaseBdev3", 00:08:11.953 "uuid": "8e7d6958-9508-4b34-8995-0912caf70c65", 00:08:11.953 "is_configured": true, 00:08:11.953 "data_offset": 2048, 00:08:11.953 "data_size": 63488 00:08:11.953 } 00:08:11.953 ] 00:08:11.953 }' 00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.953 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:12.523 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.523 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.523 04:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.523 04:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bd374c42-299b-4b10-84cd-81696fd07dc4 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.523 [2024-11-21 04:54:29.097070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:12.523 [2024-11-21 04:54:29.097262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:12.523 [2024-11-21 04:54:29.097279] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:12.523 [2024-11-21 04:54:29.097530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:12.523 NewBaseBdev 00:08:12.523 [2024-11-21 04:54:29.097649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:12.523 [2024-11-21 04:54:29.097657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:12.523 [2024-11-21 04:54:29.097762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.523 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.523 [ 00:08:12.523 { 00:08:12.523 "name": "NewBaseBdev", 00:08:12.523 "aliases": [ 00:08:12.523 "bd374c42-299b-4b10-84cd-81696fd07dc4" 00:08:12.523 ], 00:08:12.523 "product_name": "Malloc disk", 00:08:12.523 "block_size": 512, 00:08:12.523 "num_blocks": 65536, 00:08:12.523 "uuid": "bd374c42-299b-4b10-84cd-81696fd07dc4", 00:08:12.523 "assigned_rate_limits": { 00:08:12.523 "rw_ios_per_sec": 0, 00:08:12.523 "rw_mbytes_per_sec": 0, 00:08:12.523 "r_mbytes_per_sec": 0, 00:08:12.523 "w_mbytes_per_sec": 0 00:08:12.523 }, 00:08:12.523 "claimed": true, 00:08:12.523 "claim_type": "exclusive_write", 00:08:12.523 "zoned": false, 00:08:12.523 "supported_io_types": { 00:08:12.523 "read": true, 00:08:12.523 "write": true, 00:08:12.523 "unmap": true, 00:08:12.523 "flush": true, 00:08:12.524 "reset": true, 00:08:12.524 "nvme_admin": false, 00:08:12.524 "nvme_io": false, 00:08:12.524 "nvme_io_md": false, 00:08:12.524 "write_zeroes": true, 00:08:12.524 "zcopy": true, 00:08:12.524 "get_zone_info": false, 00:08:12.524 "zone_management": false, 00:08:12.524 "zone_append": false, 00:08:12.524 "compare": false, 00:08:12.524 "compare_and_write": false, 00:08:12.524 "abort": true, 00:08:12.524 "seek_hole": false, 00:08:12.524 "seek_data": false, 00:08:12.524 "copy": true, 00:08:12.524 "nvme_iov_md": false 00:08:12.524 }, 00:08:12.524 "memory_domains": [ 00:08:12.524 { 00:08:12.524 "dma_device_id": "system", 00:08:12.524 "dma_device_type": 1 00:08:12.524 }, 00:08:12.524 { 00:08:12.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.524 "dma_device_type": 2 00:08:12.524 } 00:08:12.524 ], 00:08:12.524 "driver_specific": {} 00:08:12.524 } 00:08:12.524 ] 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.524 "name": "Existed_Raid", 00:08:12.524 "uuid": "9d35f45b-feaf-4c23-9424-48c051933caf", 00:08:12.524 "strip_size_kb": 64, 00:08:12.524 "state": "online", 00:08:12.524 "raid_level": "raid0", 00:08:12.524 "superblock": true, 00:08:12.524 "num_base_bdevs": 3, 00:08:12.524 "num_base_bdevs_discovered": 3, 00:08:12.524 "num_base_bdevs_operational": 3, 00:08:12.524 "base_bdevs_list": [ 00:08:12.524 { 00:08:12.524 "name": "NewBaseBdev", 00:08:12.524 "uuid": "bd374c42-299b-4b10-84cd-81696fd07dc4", 00:08:12.524 "is_configured": true, 00:08:12.524 "data_offset": 2048, 00:08:12.524 "data_size": 63488 00:08:12.524 }, 00:08:12.524 { 00:08:12.524 "name": "BaseBdev2", 00:08:12.524 "uuid": "5eda5a9a-55e7-46f1-88a2-528707852af1", 00:08:12.524 "is_configured": true, 00:08:12.524 "data_offset": 2048, 00:08:12.524 "data_size": 63488 00:08:12.524 }, 00:08:12.524 { 00:08:12.524 "name": "BaseBdev3", 00:08:12.524 "uuid": "8e7d6958-9508-4b34-8995-0912caf70c65", 00:08:12.524 "is_configured": true, 00:08:12.524 "data_offset": 2048, 00:08:12.524 "data_size": 63488 00:08:12.524 } 00:08:12.524 ] 00:08:12.524 }' 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.524 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.093 04:54:29 
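The sequence above shows the raid bdev's `state` flip from `configuring` to `online` only once the third slot is filled by `NewBaseBdev`. A minimal sketch of the state rule the test observes (an inference from this log, not SPDK's actual implementation):

```python
def expected_state(num_configured: int, num_base_bdevs: int) -> str:
    # Sketch: the raid0 volume stays "configuring" while any base bdev
    # slot is unconfigured, and becomes "online" once all slots are filled.
    return "online" if num_configured == num_base_bdevs else "configuring"

assert expected_state(1, 3) == "configuring"  # after BaseBdev1 removed
assert expected_state(2, 3) == "configuring"  # after BaseBdev2 re-added
assert expected_state(3, 3) == "online"       # after NewBaseBdev is claimed
```

This matches the three `verify_raid_bdev_state` calls in the log: `configuring` after the delete, `configuring` after `bdev_raid_add_base_bdev`, and `online` after the replacement malloc bdev with the original UUID is created and claimed.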
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.093 [2024-11-21 04:54:29.572704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.093 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.093 "name": "Existed_Raid", 00:08:13.093 "aliases": [ 00:08:13.093 "9d35f45b-feaf-4c23-9424-48c051933caf" 00:08:13.093 ], 00:08:13.093 "product_name": "Raid Volume", 00:08:13.093 "block_size": 512, 00:08:13.093 "num_blocks": 190464, 00:08:13.093 "uuid": "9d35f45b-feaf-4c23-9424-48c051933caf", 00:08:13.093 "assigned_rate_limits": { 00:08:13.093 "rw_ios_per_sec": 0, 00:08:13.093 "rw_mbytes_per_sec": 0, 00:08:13.093 "r_mbytes_per_sec": 0, 00:08:13.093 "w_mbytes_per_sec": 0 00:08:13.093 }, 00:08:13.093 "claimed": false, 00:08:13.093 "zoned": false, 00:08:13.093 "supported_io_types": { 00:08:13.093 "read": true, 00:08:13.093 "write": true, 00:08:13.093 "unmap": true, 00:08:13.093 "flush": true, 00:08:13.093 "reset": true, 00:08:13.093 "nvme_admin": false, 00:08:13.093 "nvme_io": false, 00:08:13.093 "nvme_io_md": false, 00:08:13.093 "write_zeroes": true, 00:08:13.093 "zcopy": false, 00:08:13.093 "get_zone_info": false, 00:08:13.093 "zone_management": false, 00:08:13.093 "zone_append": false, 00:08:13.093 "compare": false, 00:08:13.093 "compare_and_write": false, 00:08:13.093 "abort": false, 
00:08:13.093 "seek_hole": false, 00:08:13.093 "seek_data": false, 00:08:13.093 "copy": false, 00:08:13.093 "nvme_iov_md": false 00:08:13.093 }, 00:08:13.093 "memory_domains": [ 00:08:13.093 { 00:08:13.093 "dma_device_id": "system", 00:08:13.093 "dma_device_type": 1 00:08:13.093 }, 00:08:13.093 { 00:08:13.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.093 "dma_device_type": 2 00:08:13.093 }, 00:08:13.093 { 00:08:13.093 "dma_device_id": "system", 00:08:13.093 "dma_device_type": 1 00:08:13.093 }, 00:08:13.093 { 00:08:13.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.093 "dma_device_type": 2 00:08:13.093 }, 00:08:13.093 { 00:08:13.093 "dma_device_id": "system", 00:08:13.093 "dma_device_type": 1 00:08:13.093 }, 00:08:13.093 { 00:08:13.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.093 "dma_device_type": 2 00:08:13.093 } 00:08:13.093 ], 00:08:13.093 "driver_specific": { 00:08:13.093 "raid": { 00:08:13.093 "uuid": "9d35f45b-feaf-4c23-9424-48c051933caf", 00:08:13.093 "strip_size_kb": 64, 00:08:13.093 "state": "online", 00:08:13.093 "raid_level": "raid0", 00:08:13.093 "superblock": true, 00:08:13.093 "num_base_bdevs": 3, 00:08:13.093 "num_base_bdevs_discovered": 3, 00:08:13.093 "num_base_bdevs_operational": 3, 00:08:13.093 "base_bdevs_list": [ 00:08:13.093 { 00:08:13.093 "name": "NewBaseBdev", 00:08:13.093 "uuid": "bd374c42-299b-4b10-84cd-81696fd07dc4", 00:08:13.093 "is_configured": true, 00:08:13.093 "data_offset": 2048, 00:08:13.093 "data_size": 63488 00:08:13.093 }, 00:08:13.093 { 00:08:13.093 "name": "BaseBdev2", 00:08:13.093 "uuid": "5eda5a9a-55e7-46f1-88a2-528707852af1", 00:08:13.093 "is_configured": true, 00:08:13.093 "data_offset": 2048, 00:08:13.093 "data_size": 63488 00:08:13.093 }, 00:08:13.093 { 00:08:13.093 "name": "BaseBdev3", 00:08:13.093 "uuid": "8e7d6958-9508-4b34-8995-0912caf70c65", 00:08:13.093 "is_configured": true, 00:08:13.093 "data_offset": 2048, 00:08:13.093 "data_size": 63488 00:08:13.093 } 00:08:13.093 ] 00:08:13.093 } 
00:08:13.093 } 00:08:13.093 }' 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:13.094 BaseBdev2 00:08:13.094 BaseBdev3' 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
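The `cmp_raid_bdev='512 '` values above (with trailing whitespace, matched by the glob `\5\1\2\ \ \ `, i.e. "512" plus three spaces) come from jq's `join(" ")`: the malloc bdevs carry no `md_size`, `md_interleave`, or `dif_type` fields, and `join` renders those nulls as empty strings. A small Python reproduction of that jq behavior:

```python
def join_fields(fields):
    # Mimics jq's join(" "): null (None) entries become empty strings,
    # so absent metadata fields leave trailing spaces in the result.
    return " ".join("" if f is None else str(f) for f in fields)

# [.block_size, .md_size, .md_interleave, .dif_type] for a plain malloc bdev
cmp_base_bdev = join_fields([512, None, None, None])
assert cmp_base_bdev == "512   "  # "512" followed by three spaces
```

The test then compares this string for the raid volume against each base bdev, confirming all of them share the same block size and (absent) metadata layout.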
.md_interleave, .dif_type] | join(" ")' 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.094 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.355 [2024-11-21 04:54:29.855845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete 
raid bdev: Existed_Raid 00:08:13.355 [2024-11-21 04:54:29.855880] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.355 [2024-11-21 04:54:29.855962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.355 [2024-11-21 04:54:29.856027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.355 [2024-11-21 04:54:29.856042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75792 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75792 ']' 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75792 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75792 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.355 killing process with pid 75792 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75792' 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75792 00:08:13.355 [2024-11-21 04:54:29.903250] bdev_raid.c:1387:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:08:13.355 04:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75792 00:08:13.355 [2024-11-21 04:54:29.933469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.616 04:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:13.616 00:08:13.616 real 0m8.790s 00:08:13.616 user 0m14.860s 00:08:13.616 sys 0m1.941s 00:08:13.616 04:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.616 04:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.616 ************************************ 00:08:13.616 END TEST raid_state_function_test_sb 00:08:13.616 ************************************ 00:08:13.616 04:54:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:13.616 04:54:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:13.616 04:54:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.616 04:54:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.616 ************************************ 00:08:13.616 START TEST raid_superblock_test 00:08:13.616 ************************************ 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:13.616 04:54:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76396 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76396 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 76396 ']' 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.616 04:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.616 [2024-11-21 04:54:30.304617] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:13.616 [2024-11-21 04:54:30.304735] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76396 ] 00:08:13.876 [2024-11-21 04:54:30.474095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.877 [2024-11-21 04:54:30.512945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.877 [2024-11-21 04:54:30.588639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.877 [2024-11-21 04:54:30.588692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.445 malloc1 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.445 [2024-11-21 04:54:31.138717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.445 [2024-11-21 04:54:31.138789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.445 [2024-11-21 04:54:31.138812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:14.445 [2024-11-21 04:54:31.138834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.445 [2024-11-21 04:54:31.141255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.445 [2024-11-21 04:54:31.141292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.445 pt1 00:08:14.445 04:54:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.445 malloc2 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.445 [2024-11-21 04:54:31.172961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.445 [2024-11-21 04:54:31.173031] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.445 [2024-11-21 04:54:31.173045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:14.445 [2024-11-21 04:54:31.173057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.445 [2024-11-21 04:54:31.175399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.445 [2024-11-21 04:54:31.175436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.445 pt2 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.445 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.704 malloc3 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.704 [2024-11-21 04:54:31.207271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:14.704 [2024-11-21 04:54:31.207318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.704 [2024-11-21 04:54:31.207333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:14.704 [2024-11-21 04:54:31.207344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.704 [2024-11-21 04:54:31.209614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.704 [2024-11-21 04:54:31.209647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:14.704 pt3 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.704 [2024-11-21 04:54:31.219294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.704 [2024-11-21 04:54:31.221408] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.704 [2024-11-21 04:54:31.221482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:14.704 [2024-11-21 04:54:31.221620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:14.704 [2024-11-21 04:54:31.221635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:14.704 [2024-11-21 04:54:31.221938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:14.704 [2024-11-21 04:54:31.222113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:14.704 [2024-11-21 04:54:31.222135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:14.704 [2024-11-21 04:54:31.222268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.704 04:54:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.704 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.704 "name": "raid_bdev1", 00:08:14.704 "uuid": "73677647-a06e-47c0-8712-767f9813ae8c", 00:08:14.704 "strip_size_kb": 64, 00:08:14.704 "state": "online", 00:08:14.704 "raid_level": "raid0", 00:08:14.704 "superblock": true, 00:08:14.704 "num_base_bdevs": 3, 00:08:14.704 "num_base_bdevs_discovered": 3, 00:08:14.704 "num_base_bdevs_operational": 3, 00:08:14.704 "base_bdevs_list": [ 00:08:14.704 { 00:08:14.704 "name": "pt1", 00:08:14.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.704 "is_configured": true, 00:08:14.704 "data_offset": 2048, 00:08:14.704 "data_size": 63488 00:08:14.704 }, 00:08:14.704 { 00:08:14.704 "name": "pt2", 00:08:14.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.704 "is_configured": true, 00:08:14.704 "data_offset": 2048, 00:08:14.704 "data_size": 63488 00:08:14.704 }, 00:08:14.704 { 00:08:14.705 "name": "pt3", 00:08:14.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.705 "is_configured": true, 00:08:14.705 "data_offset": 2048, 00:08:14.705 "data_size": 63488 00:08:14.705 } 00:08:14.705 ] 00:08:14.705 }' 00:08:14.705 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:08:14.705 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.965 [2024-11-21 04:54:31.650891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.965 "name": "raid_bdev1", 00:08:14.965 "aliases": [ 00:08:14.965 "73677647-a06e-47c0-8712-767f9813ae8c" 00:08:14.965 ], 00:08:14.965 "product_name": "Raid Volume", 00:08:14.965 "block_size": 512, 00:08:14.965 "num_blocks": 190464, 00:08:14.965 "uuid": "73677647-a06e-47c0-8712-767f9813ae8c", 00:08:14.965 "assigned_rate_limits": { 00:08:14.965 "rw_ios_per_sec": 0, 00:08:14.965 "rw_mbytes_per_sec": 0, 00:08:14.965 "r_mbytes_per_sec": 0, 00:08:14.965 "w_mbytes_per_sec": 0 
00:08:14.965 }, 00:08:14.965 "claimed": false, 00:08:14.965 "zoned": false, 00:08:14.965 "supported_io_types": { 00:08:14.965 "read": true, 00:08:14.965 "write": true, 00:08:14.965 "unmap": true, 00:08:14.965 "flush": true, 00:08:14.965 "reset": true, 00:08:14.965 "nvme_admin": false, 00:08:14.965 "nvme_io": false, 00:08:14.965 "nvme_io_md": false, 00:08:14.965 "write_zeroes": true, 00:08:14.965 "zcopy": false, 00:08:14.965 "get_zone_info": false, 00:08:14.965 "zone_management": false, 00:08:14.965 "zone_append": false, 00:08:14.965 "compare": false, 00:08:14.965 "compare_and_write": false, 00:08:14.965 "abort": false, 00:08:14.965 "seek_hole": false, 00:08:14.965 "seek_data": false, 00:08:14.965 "copy": false, 00:08:14.965 "nvme_iov_md": false 00:08:14.965 }, 00:08:14.965 "memory_domains": [ 00:08:14.965 { 00:08:14.965 "dma_device_id": "system", 00:08:14.965 "dma_device_type": 1 00:08:14.965 }, 00:08:14.965 { 00:08:14.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.965 "dma_device_type": 2 00:08:14.965 }, 00:08:14.965 { 00:08:14.965 "dma_device_id": "system", 00:08:14.965 "dma_device_type": 1 00:08:14.965 }, 00:08:14.965 { 00:08:14.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.965 "dma_device_type": 2 00:08:14.965 }, 00:08:14.965 { 00:08:14.965 "dma_device_id": "system", 00:08:14.965 "dma_device_type": 1 00:08:14.965 }, 00:08:14.965 { 00:08:14.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.965 "dma_device_type": 2 00:08:14.965 } 00:08:14.965 ], 00:08:14.965 "driver_specific": { 00:08:14.965 "raid": { 00:08:14.965 "uuid": "73677647-a06e-47c0-8712-767f9813ae8c", 00:08:14.965 "strip_size_kb": 64, 00:08:14.965 "state": "online", 00:08:14.965 "raid_level": "raid0", 00:08:14.965 "superblock": true, 00:08:14.965 "num_base_bdevs": 3, 00:08:14.965 "num_base_bdevs_discovered": 3, 00:08:14.965 "num_base_bdevs_operational": 3, 00:08:14.965 "base_bdevs_list": [ 00:08:14.965 { 00:08:14.965 "name": "pt1", 00:08:14.965 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:14.965 "is_configured": true, 00:08:14.965 "data_offset": 2048, 00:08:14.965 "data_size": 63488 00:08:14.965 }, 00:08:14.965 { 00:08:14.965 "name": "pt2", 00:08:14.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.965 "is_configured": true, 00:08:14.965 "data_offset": 2048, 00:08:14.965 "data_size": 63488 00:08:14.965 }, 00:08:14.965 { 00:08:14.965 "name": "pt3", 00:08:14.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.965 "is_configured": true, 00:08:14.965 "data_offset": 2048, 00:08:14.965 "data_size": 63488 00:08:14.965 } 00:08:14.965 ] 00:08:14.965 } 00:08:14.965 } 00:08:14.965 }' 00:08:14.965 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:15.226 pt2 00:08:15.226 pt3' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.226 [2024-11-21 04:54:31.934285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.226 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.487 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=73677647-a06e-47c0-8712-767f9813ae8c 00:08:15.487 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 73677647-a06e-47c0-8712-767f9813ae8c ']' 00:08:15.487 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.487 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.487 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.487 [2024-11-21 04:54:31.977954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.487 [2024-11-21 04:54:31.977988] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.487 [2024-11-21 04:54:31.978076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.487 [2024-11-21 04:54:31.978153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.487 [2024-11-21 04:54:31.978174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:15.487 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.487 04:54:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.487 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.487 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.487 04:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:15.487 04:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 
00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.487 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.488 [2024-11-21 04:54:32.121712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:15.488 [2024-11-21 04:54:32.123917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:15.488 [2024-11-21 04:54:32.123968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:15.488 [2024-11-21 04:54:32.124020] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:15.488 [2024-11-21 04:54:32.124060] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:15.488 [2024-11-21 04:54:32.124079] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:15.488 [2024-11-21 04:54:32.124103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.488 [2024-11-21 04:54:32.124115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:15.488 request: 00:08:15.488 { 00:08:15.488 "name": "raid_bdev1", 00:08:15.488 "raid_level": "raid0", 00:08:15.488 "base_bdevs": [ 00:08:15.488 "malloc1", 00:08:15.488 "malloc2", 00:08:15.488 "malloc3" 00:08:15.488 ], 00:08:15.488 "strip_size_kb": 64, 00:08:15.488 "superblock": false, 00:08:15.488 "method": "bdev_raid_create", 00:08:15.488 "req_id": 1 00:08:15.488 } 00:08:15.488 Got JSON-RPC error response 00:08:15.488 response: 00:08:15.488 { 00:08:15.488 "code": -17, 00:08:15.488 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:08:15.488 } 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.488 [2024-11-21 04:54:32.185568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:15.488 [2024-11-21 04:54:32.185629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.488 [2024-11-21 04:54:32.185645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:15.488 
[2024-11-21 04:54:32.185657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.488 [2024-11-21 04:54:32.188074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.488 [2024-11-21 04:54:32.188122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:15.488 [2024-11-21 04:54:32.188187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:15.488 [2024-11-21 04:54:32.188250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:15.488 pt1 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.488 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.748 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.748 "name": "raid_bdev1", 00:08:15.748 "uuid": "73677647-a06e-47c0-8712-767f9813ae8c", 00:08:15.748 "strip_size_kb": 64, 00:08:15.748 "state": "configuring", 00:08:15.748 "raid_level": "raid0", 00:08:15.748 "superblock": true, 00:08:15.748 "num_base_bdevs": 3, 00:08:15.748 "num_base_bdevs_discovered": 1, 00:08:15.748 "num_base_bdevs_operational": 3, 00:08:15.748 "base_bdevs_list": [ 00:08:15.748 { 00:08:15.748 "name": "pt1", 00:08:15.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.748 "is_configured": true, 00:08:15.748 "data_offset": 2048, 00:08:15.748 "data_size": 63488 00:08:15.748 }, 00:08:15.748 { 00:08:15.748 "name": null, 00:08:15.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.748 "is_configured": false, 00:08:15.748 "data_offset": 2048, 00:08:15.748 "data_size": 63488 00:08:15.748 }, 00:08:15.748 { 00:08:15.748 "name": null, 00:08:15.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:15.748 "is_configured": false, 00:08:15.748 "data_offset": 2048, 00:08:15.748 "data_size": 63488 00:08:15.748 } 00:08:15.748 ] 00:08:15.748 }' 00:08:15.748 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.748 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.009 [2024-11-21 04:54:32.620981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.009 [2024-11-21 04:54:32.621068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.009 [2024-11-21 04:54:32.621104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:16.009 [2024-11-21 04:54:32.621120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.009 [2024-11-21 04:54:32.621608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.009 [2024-11-21 04:54:32.621629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.009 [2024-11-21 04:54:32.621716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:16.009 [2024-11-21 04:54:32.621745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.009 pt2 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.009 [2024-11-21 04:54:32.632900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 
00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.009 "name": "raid_bdev1", 00:08:16.009 "uuid": "73677647-a06e-47c0-8712-767f9813ae8c", 00:08:16.009 "strip_size_kb": 64, 00:08:16.009 "state": "configuring", 00:08:16.009 "raid_level": "raid0", 00:08:16.009 "superblock": true, 00:08:16.009 "num_base_bdevs": 3, 00:08:16.009 "num_base_bdevs_discovered": 1, 00:08:16.009 "num_base_bdevs_operational": 3, 00:08:16.009 
"base_bdevs_list": [ 00:08:16.009 { 00:08:16.009 "name": "pt1", 00:08:16.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.009 "is_configured": true, 00:08:16.009 "data_offset": 2048, 00:08:16.009 "data_size": 63488 00:08:16.009 }, 00:08:16.009 { 00:08:16.009 "name": null, 00:08:16.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.009 "is_configured": false, 00:08:16.009 "data_offset": 0, 00:08:16.009 "data_size": 63488 00:08:16.009 }, 00:08:16.009 { 00:08:16.009 "name": null, 00:08:16.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:16.009 "is_configured": false, 00:08:16.009 "data_offset": 2048, 00:08:16.009 "data_size": 63488 00:08:16.009 } 00:08:16.009 ] 00:08:16.009 }' 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.009 04:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.579 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:16.579 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.579 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:16.579 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.579 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.579 [2024-11-21 04:54:33.088168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.579 [2024-11-21 04:54:33.088254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.579 [2024-11-21 04:54:33.088279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:16.579 [2024-11-21 04:54:33.088288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.579 [2024-11-21 
04:54:33.088766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.579 [2024-11-21 04:54:33.088782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.579 [2024-11-21 04:54:33.088871] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:16.579 [2024-11-21 04:54:33.088894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.579 pt2 00:08:16.579 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.580 [2024-11-21 04:54:33.096083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:16.580 [2024-11-21 04:54:33.096148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.580 [2024-11-21 04:54:33.096169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:16.580 [2024-11-21 04:54:33.096177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.580 [2024-11-21 04:54:33.096578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.580 [2024-11-21 04:54:33.096599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:16.580 [2024-11-21 04:54:33.096662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on 
bdev pt3 00:08:16.580 [2024-11-21 04:54:33.096682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:16.580 [2024-11-21 04:54:33.096801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:16.580 [2024-11-21 04:54:33.096814] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:16.580 [2024-11-21 04:54:33.097065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:16.580 [2024-11-21 04:54:33.097191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:16.580 [2024-11-21 04:54:33.097204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:16.580 [2024-11-21 04:54:33.097310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.580 pt3 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.580 "name": "raid_bdev1", 00:08:16.580 "uuid": "73677647-a06e-47c0-8712-767f9813ae8c", 00:08:16.580 "strip_size_kb": 64, 00:08:16.580 "state": "online", 00:08:16.580 "raid_level": "raid0", 00:08:16.580 "superblock": true, 00:08:16.580 "num_base_bdevs": 3, 00:08:16.580 "num_base_bdevs_discovered": 3, 00:08:16.580 "num_base_bdevs_operational": 3, 00:08:16.580 "base_bdevs_list": [ 00:08:16.580 { 00:08:16.580 "name": "pt1", 00:08:16.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.580 "is_configured": true, 00:08:16.580 "data_offset": 2048, 00:08:16.580 "data_size": 63488 00:08:16.580 }, 00:08:16.580 { 00:08:16.580 "name": "pt2", 00:08:16.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.580 "is_configured": true, 00:08:16.580 "data_offset": 2048, 00:08:16.580 "data_size": 63488 00:08:16.580 }, 00:08:16.580 { 00:08:16.580 "name": "pt3", 00:08:16.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:16.580 "is_configured": true, 00:08:16.580 "data_offset": 2048, 
00:08:16.580 "data_size": 63488 00:08:16.580 } 00:08:16.580 ] 00:08:16.580 }' 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.580 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.841 [2024-11-21 04:54:33.499659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.841 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.841 "name": "raid_bdev1", 00:08:16.841 "aliases": [ 00:08:16.841 "73677647-a06e-47c0-8712-767f9813ae8c" 00:08:16.841 ], 00:08:16.841 "product_name": "Raid Volume", 00:08:16.841 "block_size": 512, 00:08:16.841 "num_blocks": 190464, 00:08:16.841 "uuid": "73677647-a06e-47c0-8712-767f9813ae8c", 00:08:16.841 
"assigned_rate_limits": { 00:08:16.841 "rw_ios_per_sec": 0, 00:08:16.841 "rw_mbytes_per_sec": 0, 00:08:16.841 "r_mbytes_per_sec": 0, 00:08:16.841 "w_mbytes_per_sec": 0 00:08:16.841 }, 00:08:16.841 "claimed": false, 00:08:16.841 "zoned": false, 00:08:16.841 "supported_io_types": { 00:08:16.841 "read": true, 00:08:16.841 "write": true, 00:08:16.841 "unmap": true, 00:08:16.841 "flush": true, 00:08:16.841 "reset": true, 00:08:16.841 "nvme_admin": false, 00:08:16.841 "nvme_io": false, 00:08:16.841 "nvme_io_md": false, 00:08:16.841 "write_zeroes": true, 00:08:16.841 "zcopy": false, 00:08:16.841 "get_zone_info": false, 00:08:16.841 "zone_management": false, 00:08:16.841 "zone_append": false, 00:08:16.841 "compare": false, 00:08:16.841 "compare_and_write": false, 00:08:16.841 "abort": false, 00:08:16.841 "seek_hole": false, 00:08:16.841 "seek_data": false, 00:08:16.841 "copy": false, 00:08:16.841 "nvme_iov_md": false 00:08:16.841 }, 00:08:16.841 "memory_domains": [ 00:08:16.841 { 00:08:16.841 "dma_device_id": "system", 00:08:16.841 "dma_device_type": 1 00:08:16.841 }, 00:08:16.841 { 00:08:16.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.841 "dma_device_type": 2 00:08:16.841 }, 00:08:16.841 { 00:08:16.841 "dma_device_id": "system", 00:08:16.841 "dma_device_type": 1 00:08:16.841 }, 00:08:16.841 { 00:08:16.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.841 "dma_device_type": 2 00:08:16.841 }, 00:08:16.841 { 00:08:16.841 "dma_device_id": "system", 00:08:16.841 "dma_device_type": 1 00:08:16.841 }, 00:08:16.841 { 00:08:16.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.841 "dma_device_type": 2 00:08:16.841 } 00:08:16.841 ], 00:08:16.841 "driver_specific": { 00:08:16.841 "raid": { 00:08:16.841 "uuid": "73677647-a06e-47c0-8712-767f9813ae8c", 00:08:16.841 "strip_size_kb": 64, 00:08:16.841 "state": "online", 00:08:16.841 "raid_level": "raid0", 00:08:16.841 "superblock": true, 00:08:16.841 "num_base_bdevs": 3, 00:08:16.841 "num_base_bdevs_discovered": 3, 
00:08:16.841 "num_base_bdevs_operational": 3, 00:08:16.841 "base_bdevs_list": [ 00:08:16.841 { 00:08:16.841 "name": "pt1", 00:08:16.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.841 "is_configured": true, 00:08:16.841 "data_offset": 2048, 00:08:16.841 "data_size": 63488 00:08:16.841 }, 00:08:16.841 { 00:08:16.842 "name": "pt2", 00:08:16.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.842 "is_configured": true, 00:08:16.842 "data_offset": 2048, 00:08:16.842 "data_size": 63488 00:08:16.842 }, 00:08:16.842 { 00:08:16.842 "name": "pt3", 00:08:16.842 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:16.842 "is_configured": true, 00:08:16.842 "data_offset": 2048, 00:08:16.842 "data_size": 63488 00:08:16.842 } 00:08:16.842 ] 00:08:16.842 } 00:08:16.842 } 00:08:16.842 }' 00:08:16.842 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.842 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:16.842 pt2 00:08:16.842 pt3' 00:08:16.842 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.103 04:54:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.103 [2024-11-21 04:54:33.735345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 73677647-a06e-47c0-8712-767f9813ae8c '!=' 73677647-a06e-47c0-8712-767f9813ae8c ']' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76396 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76396 ']' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76396 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76396 00:08:17.103 killing process with pid 76396 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76396' 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76396 00:08:17.103 04:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 76396 00:08:17.103 [2024-11-21 04:54:33.817640] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.103 [2024-11-21 04:54:33.817836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.103 [2024-11-21 04:54:33.817927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.103 [2024-11-21 04:54:33.817941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:17.364 [2024-11-21 04:54:33.881501] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.624 04:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:17.624 00:08:17.624 real 0m3.975s 00:08:17.624 user 0m6.116s 00:08:17.624 sys 0m0.921s 00:08:17.624 04:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.624 04:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.624 ************************************ 00:08:17.624 END TEST raid_superblock_test 00:08:17.624 ************************************ 00:08:17.624 04:54:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:17.624 04:54:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:17.624 04:54:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.624 04:54:34 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:08:17.624 ************************************ 00:08:17.624 START TEST raid_read_error_test 00:08:17.624 ************************************ 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eMDSQPFN2W 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76638 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76638 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76638 ']' 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.625 04:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:17.625 [2024-11-21 04:54:34.350159] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:17.625 [2024-11-21 04:54:34.350280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76638 ] 00:08:17.893 [2024-11-21 04:54:34.519704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.893 [2024-11-21 04:54:34.558878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.168 [2024-11-21 04:54:34.634205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.168 [2024-11-21 04:54:34.634259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.738 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.738 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.738 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.738 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.738 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.738 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.738 BaseBdev1_malloc 00:08:18.738 04:54:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.738 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:18.738 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.738 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.738 true 00:08:18.738 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.739 [2024-11-21 04:54:35.200314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.739 [2024-11-21 04:54:35.200389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.739 [2024-11-21 04:54:35.200413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.739 [2024-11-21 04:54:35.200422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.739 [2024-11-21 04:54:35.202832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.739 [2024-11-21 04:54:35.202866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.739 BaseBdev1 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.739 BaseBdev2_malloc 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.739 true 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.739 [2024-11-21 04:54:35.234808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:18.739 [2024-11-21 04:54:35.234857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.739 [2024-11-21 04:54:35.234875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:18.739 [2024-11-21 04:54:35.234883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.739 [2024-11-21 04:54:35.237235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.739 [2024-11-21 04:54:35.237271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:18.739 BaseBdev2 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.739 BaseBdev3_malloc 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.739 true 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.739 [2024-11-21 04:54:35.269259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:18.739 [2024-11-21 04:54:35.269307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.739 [2024-11-21 04:54:35.269327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:18.739 [2024-11-21 04:54:35.269336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.739 [2024-11-21 04:54:35.271695] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.739 [2024-11-21 04:54:35.271730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:18.739 BaseBdev3 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.739 [2024-11-21 04:54:35.277302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.739 [2024-11-21 04:54:35.279384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.739 [2024-11-21 04:54:35.279477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:18.739 [2024-11-21 04:54:35.279652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:18.739 [2024-11-21 04:54:35.279669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:18.739 [2024-11-21 04:54:35.279955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:18.739 [2024-11-21 04:54:35.280145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:18.739 [2024-11-21 04:54:35.280166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:18.739 [2024-11-21 04:54:35.280327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.739 04:54:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.739 "name": "raid_bdev1", 00:08:18.739 "uuid": "edb3cf8d-595d-4d47-9ba8-ba88b615ec4e", 00:08:18.739 "strip_size_kb": 64, 00:08:18.739 "state": "online", 00:08:18.739 "raid_level": "raid0", 00:08:18.739 "superblock": true, 00:08:18.739 "num_base_bdevs": 3, 
00:08:18.739 "num_base_bdevs_discovered": 3, 00:08:18.739 "num_base_bdevs_operational": 3, 00:08:18.739 "base_bdevs_list": [ 00:08:18.739 { 00:08:18.739 "name": "BaseBdev1", 00:08:18.739 "uuid": "3bbf373b-dab9-5db0-94fa-87b1b51893fe", 00:08:18.739 "is_configured": true, 00:08:18.739 "data_offset": 2048, 00:08:18.739 "data_size": 63488 00:08:18.739 }, 00:08:18.739 { 00:08:18.739 "name": "BaseBdev2", 00:08:18.739 "uuid": "c9a5e95e-6b94-52fe-8136-728c03a20cf9", 00:08:18.739 "is_configured": true, 00:08:18.739 "data_offset": 2048, 00:08:18.739 "data_size": 63488 00:08:18.739 }, 00:08:18.739 { 00:08:18.739 "name": "BaseBdev3", 00:08:18.739 "uuid": "d143ea1a-fcdb-5e5a-9516-5c2baeee3cee", 00:08:18.739 "is_configured": true, 00:08:18.739 "data_offset": 2048, 00:08:18.739 "data_size": 63488 00:08:18.739 } 00:08:18.739 ] 00:08:18.739 }' 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.739 04:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.309 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:19.309 04:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:19.309 [2024-11-21 04:54:35.832933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:20.248 
04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.248 "name": "raid_bdev1", 
00:08:20.248 "uuid": "edb3cf8d-595d-4d47-9ba8-ba88b615ec4e", 00:08:20.248 "strip_size_kb": 64, 00:08:20.248 "state": "online", 00:08:20.248 "raid_level": "raid0", 00:08:20.248 "superblock": true, 00:08:20.248 "num_base_bdevs": 3, 00:08:20.248 "num_base_bdevs_discovered": 3, 00:08:20.248 "num_base_bdevs_operational": 3, 00:08:20.248 "base_bdevs_list": [ 00:08:20.248 { 00:08:20.248 "name": "BaseBdev1", 00:08:20.248 "uuid": "3bbf373b-dab9-5db0-94fa-87b1b51893fe", 00:08:20.248 "is_configured": true, 00:08:20.248 "data_offset": 2048, 00:08:20.248 "data_size": 63488 00:08:20.248 }, 00:08:20.248 { 00:08:20.248 "name": "BaseBdev2", 00:08:20.248 "uuid": "c9a5e95e-6b94-52fe-8136-728c03a20cf9", 00:08:20.248 "is_configured": true, 00:08:20.248 "data_offset": 2048, 00:08:20.248 "data_size": 63488 00:08:20.248 }, 00:08:20.248 { 00:08:20.248 "name": "BaseBdev3", 00:08:20.248 "uuid": "d143ea1a-fcdb-5e5a-9516-5c2baeee3cee", 00:08:20.248 "is_configured": true, 00:08:20.248 "data_offset": 2048, 00:08:20.248 "data_size": 63488 00:08:20.248 } 00:08:20.248 ] 00:08:20.248 }' 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.248 04:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.509 [2024-11-21 04:54:37.161177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.509 [2024-11-21 04:54:37.161226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.509 [2024-11-21 04:54:37.163764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.509 [2024-11-21 04:54:37.163819] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.509 [2024-11-21 04:54:37.163858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.509 [2024-11-21 04:54:37.163870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:20.509 { 00:08:20.509 "results": [ 00:08:20.509 { 00:08:20.509 "job": "raid_bdev1", 00:08:20.509 "core_mask": "0x1", 00:08:20.509 "workload": "randrw", 00:08:20.509 "percentage": 50, 00:08:20.509 "status": "finished", 00:08:20.509 "queue_depth": 1, 00:08:20.509 "io_size": 131072, 00:08:20.509 "runtime": 1.328688, 00:08:20.509 "iops": 15050.937466132003, 00:08:20.509 "mibps": 1881.3671832665004, 00:08:20.509 "io_failed": 1, 00:08:20.509 "io_timeout": 0, 00:08:20.509 "avg_latency_us": 93.45432302182795, 00:08:20.509 "min_latency_us": 22.358078602620086, 00:08:20.509 "max_latency_us": 1366.5257641921398 00:08:20.509 } 00:08:20.509 ], 00:08:20.509 "core_count": 1 00:08:20.509 } 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76638 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76638 ']' 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76638 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76638 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.509 killing process with pid 76638 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76638' 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76638 00:08:20.509 [2024-11-21 04:54:37.210214] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.509 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76638 00:08:20.768 [2024-11-21 04:54:37.258130] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.029 04:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eMDSQPFN2W 00:08:21.029 04:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:21.029 04:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:21.029 04:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:21.029 04:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:21.029 04:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.029 04:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.029 04:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:21.029 00:08:21.029 real 0m3.320s 00:08:21.029 user 0m4.078s 00:08:21.029 sys 0m0.584s 00:08:21.029 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.029 04:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.029 ************************************ 00:08:21.029 END TEST raid_read_error_test 00:08:21.029 ************************************ 00:08:21.029 04:54:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid0 3 write 00:08:21.029 04:54:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.029 04:54:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.029 04:54:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.029 ************************************ 00:08:21.029 START TEST raid_write_error_test 00:08:21.029 ************************************ 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sqRByoa2EB 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76767 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76767 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76767 ']' 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.029 04:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.029 [2024-11-21 04:54:37.739180] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:21.029 [2024-11-21 04:54:37.739321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76767 ] 00:08:21.289 [2024-11-21 04:54:37.912583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.289 [2024-11-21 04:54:37.952496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.549 [2024-11-21 04:54:38.027898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.549 [2024-11-21 04:54:38.027944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.120 
04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 BaseBdev1_malloc 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 true 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 [2024-11-21 04:54:38.617174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.120 [2024-11-21 04:54:38.617239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.120 [2024-11-21 04:54:38.617259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:22.120 [2024-11-21 04:54:38.617268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.120 [2024-11-21 04:54:38.619650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.120 [2024-11-21 04:54:38.619683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.120 BaseBdev1 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 BaseBdev2_malloc 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 true 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.120 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 [2024-11-21 04:54:38.663543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:22.120 [2024-11-21 04:54:38.663611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.120 [2024-11-21 04:54:38.663634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:22.120 [2024-11-21 04:54:38.663643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.121 [2024-11-21 04:54:38.666150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.121 [2024-11-21 04:54:38.666185] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:22.121 BaseBdev2 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 BaseBdev3_malloc 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 true 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 [2024-11-21 04:54:38.710382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:22.121 [2024-11-21 04:54:38.710450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.121 [2024-11-21 04:54:38.710475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:22.121 
[2024-11-21 04:54:38.710484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.121 [2024-11-21 04:54:38.713040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.121 [2024-11-21 04:54:38.713073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:22.121 BaseBdev3 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 [2024-11-21 04:54:38.722443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.121 [2024-11-21 04:54:38.724667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.121 [2024-11-21 04:54:38.724752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.121 [2024-11-21 04:54:38.724950] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:22.121 [2024-11-21 04:54:38.724971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:22.121 [2024-11-21 04:54:38.725315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:22.121 [2024-11-21 04:54:38.725522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:22.121 [2024-11-21 04:54:38.725544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:22.121 [2024-11-21 04:54:38.725734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.121 "name": "raid_bdev1", 00:08:22.121 "uuid": "f3b0955b-8a49-4368-b3e2-184f59ba1337", 
00:08:22.121 "strip_size_kb": 64, 00:08:22.121 "state": "online", 00:08:22.121 "raid_level": "raid0", 00:08:22.121 "superblock": true, 00:08:22.121 "num_base_bdevs": 3, 00:08:22.121 "num_base_bdevs_discovered": 3, 00:08:22.121 "num_base_bdevs_operational": 3, 00:08:22.121 "base_bdevs_list": [ 00:08:22.121 { 00:08:22.121 "name": "BaseBdev1", 00:08:22.121 "uuid": "da40dff1-ae05-5df5-b9b8-596fdec332a3", 00:08:22.121 "is_configured": true, 00:08:22.121 "data_offset": 2048, 00:08:22.121 "data_size": 63488 00:08:22.121 }, 00:08:22.121 { 00:08:22.121 "name": "BaseBdev2", 00:08:22.121 "uuid": "18882a34-f204-5ac0-86e7-d9dcfb6732a5", 00:08:22.121 "is_configured": true, 00:08:22.121 "data_offset": 2048, 00:08:22.121 "data_size": 63488 00:08:22.121 }, 00:08:22.121 { 00:08:22.121 "name": "BaseBdev3", 00:08:22.121 "uuid": "340449ac-3728-5f20-89ec-93eb9711422d", 00:08:22.121 "is_configured": true, 00:08:22.121 "data_offset": 2048, 00:08:22.121 "data_size": 63488 00:08:22.121 } 00:08:22.121 ] 00:08:22.121 }' 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.121 04:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.691 04:54:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:22.691 04:54:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:22.691 [2024-11-21 04:54:39.261908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.629 "name": "raid_bdev1", 00:08:23.629 "uuid": "f3b0955b-8a49-4368-b3e2-184f59ba1337", 00:08:23.629 "strip_size_kb": 64, 00:08:23.629 "state": "online", 00:08:23.629 "raid_level": "raid0", 00:08:23.629 "superblock": true, 00:08:23.629 "num_base_bdevs": 3, 00:08:23.629 "num_base_bdevs_discovered": 3, 00:08:23.629 "num_base_bdevs_operational": 3, 00:08:23.629 "base_bdevs_list": [ 00:08:23.629 { 00:08:23.629 "name": "BaseBdev1", 00:08:23.629 "uuid": "da40dff1-ae05-5df5-b9b8-596fdec332a3", 00:08:23.629 "is_configured": true, 00:08:23.629 "data_offset": 2048, 00:08:23.629 "data_size": 63488 00:08:23.629 }, 00:08:23.629 { 00:08:23.629 "name": "BaseBdev2", 00:08:23.629 "uuid": "18882a34-f204-5ac0-86e7-d9dcfb6732a5", 00:08:23.629 "is_configured": true, 00:08:23.629 "data_offset": 2048, 00:08:23.629 "data_size": 63488 00:08:23.629 }, 00:08:23.629 { 00:08:23.629 "name": "BaseBdev3", 00:08:23.629 "uuid": "340449ac-3728-5f20-89ec-93eb9711422d", 00:08:23.629 "is_configured": true, 00:08:23.629 "data_offset": 2048, 00:08:23.629 "data_size": 63488 00:08:23.629 } 00:08:23.629 ] 00:08:23.629 }' 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.629 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.199 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.199 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.199 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.199 [2024-11-21 04:54:40.662587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.199 [2024-11-21 04:54:40.662637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:08:24.199 [2024-11-21 04:54:40.665189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.199 [2024-11-21 04:54:40.665245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.199 [2024-11-21 04:54:40.665285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.199 [2024-11-21 04:54:40.665297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:24.199 { 00:08:24.199 "results": [ 00:08:24.199 { 00:08:24.199 "job": "raid_bdev1", 00:08:24.199 "core_mask": "0x1", 00:08:24.199 "workload": "randrw", 00:08:24.199 "percentage": 50, 00:08:24.199 "status": "finished", 00:08:24.199 "queue_depth": 1, 00:08:24.199 "io_size": 131072, 00:08:24.199 "runtime": 1.401173, 00:08:24.199 "iops": 15106.62851767769, 00:08:24.199 "mibps": 1888.3285647097111, 00:08:24.199 "io_failed": 1, 00:08:24.199 "io_timeout": 0, 00:08:24.199 "avg_latency_us": 92.97755597144244, 00:08:24.199 "min_latency_us": 24.034934497816593, 00:08:24.199 "max_latency_us": 1323.598253275109 00:08:24.199 } 00:08:24.199 ], 00:08:24.199 "core_count": 1 00:08:24.199 } 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76767 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76767 ']' 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76767 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers 
-o comm= 76767 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.200 killing process with pid 76767 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76767' 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76767 00:08:24.200 [2024-11-21 04:54:40.717901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.200 04:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76767 00:08:24.200 [2024-11-21 04:54:40.764410] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.460 04:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sqRByoa2EB 00:08:24.460 04:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:24.460 04:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:24.460 04:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:24.460 04:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:24.460 04:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.460 04:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.460 04:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:24.460 00:08:24.460 real 0m3.450s 00:08:24.460 user 0m4.276s 00:08:24.460 sys 0m0.645s 00:08:24.460 04:54:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.460 04:54:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.460 
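The `grep -v Job | grep raid_bdev1 | awk '{print $6}'` pipeline above distills the test's pass/fail criterion: the injected write error must show up as a non-zero failure rate in bdevperf's summary. A minimal sketch of that check, with an illustrative summary line (the real bdevperf column layout may differ; in this run field 6 carried the 0.71 failures/s seen in the log):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the check at bdev_raid.sh@845-849.
# The summary text below is illustrative, not real bdevperf output;
# values are taken from the "results" JSON logged above.
summary='Job: raid_bdev1 (Core Mask 0x1)
   raid_bdev1  1.40  15106.63  1888.33  1  0.71  92.98'

# Drop the "Job:" header, keep raid_bdev1's data line, take column 6.
fail_per_s=$(printf '%s\n' "$summary" | grep -v Job | grep raid_bdev1 | awk '{print $6}')

# raid0 has no redundancy, so a write failure must be visible to the consumer.
if [[ $fail_per_s != "0.00" ]]; then
  echo "raid_bdev1 saw $fail_per_s failed I/Os per second"
fi
```

This mirrors why the test then takes the `has_redundancy raid0` → `return 1` branch: for raid0 the harness asserts `fail_per_s` is *not* `0.00`.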
************************************ 00:08:24.460 END TEST raid_write_error_test 00:08:24.460 ************************************ 00:08:24.460 04:54:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:24.460 04:54:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:08:24.460 04:54:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:24.460 04:54:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.460 04:54:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.460 ************************************ 00:08:24.460 START TEST raid_state_function_test 00:08:24.460 ************************************ 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:24.460 04:54:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76900 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:08:24.460 Process raid pid: 76900 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76900' 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76900 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 76900 ']' 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.460 04:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.769 [2024-11-21 04:54:41.263370] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:08:24.769 [2024-11-21 04:54:41.263515] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.769 [2024-11-21 04:54:41.440032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.769 [2024-11-21 04:54:41.484260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.029 [2024-11-21 04:54:41.560392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.029 [2024-11-21 04:54:41.560439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.599 [2024-11-21 04:54:42.095400] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.599 [2024-11-21 04:54:42.095454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.599 [2024-11-21 04:54:42.095463] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.599 [2024-11-21 04:54:42.095474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.599 [2024-11-21 04:54:42.095482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:08:25.599 [2024-11-21 04:54:42.095494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.599 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.599 04:54:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.599 "name": "Existed_Raid", 00:08:25.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.599 "strip_size_kb": 64, 00:08:25.599 "state": "configuring", 00:08:25.599 "raid_level": "concat", 00:08:25.599 "superblock": false, 00:08:25.599 "num_base_bdevs": 3, 00:08:25.599 "num_base_bdevs_discovered": 0, 00:08:25.599 "num_base_bdevs_operational": 3, 00:08:25.599 "base_bdevs_list": [ 00:08:25.599 { 00:08:25.599 "name": "BaseBdev1", 00:08:25.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.599 "is_configured": false, 00:08:25.599 "data_offset": 0, 00:08:25.599 "data_size": 0 00:08:25.599 }, 00:08:25.599 { 00:08:25.599 "name": "BaseBdev2", 00:08:25.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.599 "is_configured": false, 00:08:25.599 "data_offset": 0, 00:08:25.599 "data_size": 0 00:08:25.599 }, 00:08:25.599 { 00:08:25.599 "name": "BaseBdev3", 00:08:25.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.599 "is_configured": false, 00:08:25.599 "data_offset": 0, 00:08:25.599 "data_size": 0 00:08:25.599 } 00:08:25.599 ] 00:08:25.599 }' 00:08:25.600 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.600 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.859 [2024-11-21 04:54:42.570465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.859 [2024-11-21 04:54:42.570510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 
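The `verify_raid_bdev_state` helper invoked above selects one entry from `rpc_cmd bdev_raid_get_bdevs all` with jq and compares its fields against the expected state. A minimal sketch under stated assumptions (field values copied from the Existed_Raid JSON in the log; `sed` stands in for the harness's jq extraction so the sketch runs without jq or an SPDK target):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the comparisons in bdev_raid.sh@103-115.
# raid_bdev_info here is a trimmed copy of the JSON logged above.
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring", "raid_level": "concat", "strip_size_kb": 64, "num_base_bdevs_operational": 3 }'

# Extract the fields the helper checks (the real script uses jq).
state=$(printf '%s' "$raid_bdev_info" | sed -n 's/.*"state": "\([a-z]*\)".*/\1/p')
level=$(printf '%s' "$raid_bdev_info" | sed -n 's/.*"raid_level": "\([a-z0-9]*\)".*/\1/p')

# Before any base bdev exists, the raid bdev must sit in "configuring".
if [[ $state == configuring && $level == concat ]]; then
  echo "Existed_Raid: $state/$level as expected"
fi
```

Note how the log bears this out: `bdev_raid_create` succeeds even though all three BaseBdevs "don't exist now", and the bdev stays in `"state": "configuring"` with `num_base_bdevs_discovered: 0` until base bdevs are claimed.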
00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.859 [2024-11-21 04:54:42.582452] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.859 [2024-11-21 04:54:42.582489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.859 [2024-11-21 04:54:42.582498] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.859 [2024-11-21 04:54:42.582507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.859 [2024-11-21 04:54:42.582513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:25.859 [2024-11-21 04:54:42.582523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.859 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.118 [2024-11-21 04:54:42.609617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.118 BaseBdev1 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.118 [ 00:08:26.118 { 00:08:26.118 "name": "BaseBdev1", 00:08:26.118 "aliases": [ 00:08:26.118 "1e5efe95-fbb4-4c4c-9d23-847418300245" 00:08:26.118 ], 00:08:26.118 "product_name": "Malloc disk", 00:08:26.118 "block_size": 512, 00:08:26.118 "num_blocks": 65536, 00:08:26.118 "uuid": "1e5efe95-fbb4-4c4c-9d23-847418300245", 00:08:26.118 "assigned_rate_limits": { 00:08:26.118 "rw_ios_per_sec": 0, 00:08:26.118 "rw_mbytes_per_sec": 0, 00:08:26.118 "r_mbytes_per_sec": 0, 00:08:26.118 "w_mbytes_per_sec": 0 00:08:26.118 }, 
00:08:26.118 "claimed": true, 00:08:26.118 "claim_type": "exclusive_write", 00:08:26.118 "zoned": false, 00:08:26.118 "supported_io_types": { 00:08:26.118 "read": true, 00:08:26.118 "write": true, 00:08:26.118 "unmap": true, 00:08:26.118 "flush": true, 00:08:26.118 "reset": true, 00:08:26.118 "nvme_admin": false, 00:08:26.118 "nvme_io": false, 00:08:26.118 "nvme_io_md": false, 00:08:26.118 "write_zeroes": true, 00:08:26.118 "zcopy": true, 00:08:26.118 "get_zone_info": false, 00:08:26.118 "zone_management": false, 00:08:26.118 "zone_append": false, 00:08:26.118 "compare": false, 00:08:26.118 "compare_and_write": false, 00:08:26.118 "abort": true, 00:08:26.118 "seek_hole": false, 00:08:26.118 "seek_data": false, 00:08:26.118 "copy": true, 00:08:26.118 "nvme_iov_md": false 00:08:26.118 }, 00:08:26.118 "memory_domains": [ 00:08:26.118 { 00:08:26.118 "dma_device_id": "system", 00:08:26.118 "dma_device_type": 1 00:08:26.118 }, 00:08:26.118 { 00:08:26.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.118 "dma_device_type": 2 00:08:26.118 } 00:08:26.118 ], 00:08:26.118 "driver_specific": {} 00:08:26.118 } 00:08:26.118 ] 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:26.118 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.119 04:54:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.119 "name": "Existed_Raid", 00:08:26.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.119 "strip_size_kb": 64, 00:08:26.119 "state": "configuring", 00:08:26.119 "raid_level": "concat", 00:08:26.119 "superblock": false, 00:08:26.119 "num_base_bdevs": 3, 00:08:26.119 "num_base_bdevs_discovered": 1, 00:08:26.119 "num_base_bdevs_operational": 3, 00:08:26.119 "base_bdevs_list": [ 00:08:26.119 { 00:08:26.119 "name": "BaseBdev1", 00:08:26.119 "uuid": "1e5efe95-fbb4-4c4c-9d23-847418300245", 00:08:26.119 "is_configured": true, 00:08:26.119 "data_offset": 0, 00:08:26.119 "data_size": 65536 00:08:26.119 }, 00:08:26.119 { 00:08:26.119 "name": "BaseBdev2", 00:08:26.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.119 "is_configured": false, 00:08:26.119 
"data_offset": 0, 00:08:26.119 "data_size": 0 00:08:26.119 }, 00:08:26.119 { 00:08:26.119 "name": "BaseBdev3", 00:08:26.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.119 "is_configured": false, 00:08:26.119 "data_offset": 0, 00:08:26.119 "data_size": 0 00:08:26.119 } 00:08:26.119 ] 00:08:26.119 }' 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.119 04:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.685 [2024-11-21 04:54:43.128829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:26.685 [2024-11-21 04:54:43.128915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.685 [2024-11-21 04:54:43.136838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.685 [2024-11-21 04:54:43.139097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.685 [2024-11-21 04:54:43.139155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:26.685 [2024-11-21 04:54:43.139165] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:26.685 [2024-11-21 04:54:43.139176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.685 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.686 "name": "Existed_Raid", 00:08:26.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.686 "strip_size_kb": 64, 00:08:26.686 "state": "configuring", 00:08:26.686 "raid_level": "concat", 00:08:26.686 "superblock": false, 00:08:26.686 "num_base_bdevs": 3, 00:08:26.686 "num_base_bdevs_discovered": 1, 00:08:26.686 "num_base_bdevs_operational": 3, 00:08:26.686 "base_bdevs_list": [ 00:08:26.686 { 00:08:26.686 "name": "BaseBdev1", 00:08:26.686 "uuid": "1e5efe95-fbb4-4c4c-9d23-847418300245", 00:08:26.686 "is_configured": true, 00:08:26.686 "data_offset": 0, 00:08:26.686 "data_size": 65536 00:08:26.686 }, 00:08:26.686 { 00:08:26.686 "name": "BaseBdev2", 00:08:26.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.686 "is_configured": false, 00:08:26.686 "data_offset": 0, 00:08:26.686 "data_size": 0 00:08:26.686 }, 00:08:26.686 { 00:08:26.686 "name": "BaseBdev3", 00:08:26.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.686 "is_configured": false, 00:08:26.686 "data_offset": 0, 00:08:26.686 "data_size": 0 00:08:26.686 } 00:08:26.686 ] 00:08:26.686 }' 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.686 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
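Each `rpc_cmd bdev_malloc_create 32 512 -b BaseBdevN` above is followed by `waitforbdev`, which calls `bdev_wait_for_examine` and then `bdev_get_bdevs -b <name> -t 2000` (the `-t` flag makes the target itself wait up to the timeout for the bdev to appear). A minimal sketch, with `rpc_cmd` stubbed out so it runs without an SPDK target — the stub's JSON is illustrative:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of waitforbdev (common/autotest_common.sh@903-911).
# rpc_cmd is a stand-in stub; a real run would talk to /var/tmp/spdk.sock.
rpc_cmd() { echo '[{"name": "BaseBdev2", "block_size": 512, "num_blocks": 65536}]'; }

waitforbdev() {
  local bdev_name=$1
  local bdev_timeout=${2:-2000}   # the harness defaults to 2000 ms
  rpc_cmd bdev_wait_for_examine > /dev/null
  # -t delegates the waiting to the target; success means the bdev exists.
  if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null; then
    echo "$bdev_name is ready"
  fi
}

waitforbdev BaseBdev2
```

Once `waitforbdev` returns, the log shows `bdev_get_bdevs` dumping the full Malloc disk descriptor (claimed, `claim_type: exclusive_write`) because the raid bdev has already claimed it.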
00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.945 [2024-11-21 04:54:43.560676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.945 BaseBdev2 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.945 [ 00:08:26.945 { 00:08:26.945 "name": "BaseBdev2", 00:08:26.945 "aliases": [ 00:08:26.945 "a1a6fc93-16cd-4556-ad41-370787ab9715" 00:08:26.945 ], 00:08:26.945 
"product_name": "Malloc disk", 00:08:26.945 "block_size": 512, 00:08:26.945 "num_blocks": 65536, 00:08:26.945 "uuid": "a1a6fc93-16cd-4556-ad41-370787ab9715", 00:08:26.945 "assigned_rate_limits": { 00:08:26.945 "rw_ios_per_sec": 0, 00:08:26.945 "rw_mbytes_per_sec": 0, 00:08:26.945 "r_mbytes_per_sec": 0, 00:08:26.945 "w_mbytes_per_sec": 0 00:08:26.945 }, 00:08:26.945 "claimed": true, 00:08:26.945 "claim_type": "exclusive_write", 00:08:26.945 "zoned": false, 00:08:26.945 "supported_io_types": { 00:08:26.945 "read": true, 00:08:26.945 "write": true, 00:08:26.945 "unmap": true, 00:08:26.945 "flush": true, 00:08:26.945 "reset": true, 00:08:26.945 "nvme_admin": false, 00:08:26.945 "nvme_io": false, 00:08:26.945 "nvme_io_md": false, 00:08:26.945 "write_zeroes": true, 00:08:26.945 "zcopy": true, 00:08:26.945 "get_zone_info": false, 00:08:26.945 "zone_management": false, 00:08:26.945 "zone_append": false, 00:08:26.945 "compare": false, 00:08:26.945 "compare_and_write": false, 00:08:26.945 "abort": true, 00:08:26.945 "seek_hole": false, 00:08:26.945 "seek_data": false, 00:08:26.945 "copy": true, 00:08:26.945 "nvme_iov_md": false 00:08:26.945 }, 00:08:26.945 "memory_domains": [ 00:08:26.945 { 00:08:26.945 "dma_device_id": "system", 00:08:26.945 "dma_device_type": 1 00:08:26.945 }, 00:08:26.945 { 00:08:26.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.945 "dma_device_type": 2 00:08:26.945 } 00:08:26.945 ], 00:08:26.945 "driver_specific": {} 00:08:26.945 } 00:08:26.945 ] 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.945 "name": "Existed_Raid", 00:08:26.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.945 "strip_size_kb": 64, 00:08:26.945 "state": "configuring", 00:08:26.945 "raid_level": "concat", 00:08:26.945 "superblock": false, 
00:08:26.945 "num_base_bdevs": 3, 00:08:26.945 "num_base_bdevs_discovered": 2, 00:08:26.945 "num_base_bdevs_operational": 3, 00:08:26.945 "base_bdevs_list": [ 00:08:26.945 { 00:08:26.945 "name": "BaseBdev1", 00:08:26.945 "uuid": "1e5efe95-fbb4-4c4c-9d23-847418300245", 00:08:26.945 "is_configured": true, 00:08:26.945 "data_offset": 0, 00:08:26.945 "data_size": 65536 00:08:26.945 }, 00:08:26.945 { 00:08:26.945 "name": "BaseBdev2", 00:08:26.945 "uuid": "a1a6fc93-16cd-4556-ad41-370787ab9715", 00:08:26.945 "is_configured": true, 00:08:26.945 "data_offset": 0, 00:08:26.945 "data_size": 65536 00:08:26.945 }, 00:08:26.945 { 00:08:26.945 "name": "BaseBdev3", 00:08:26.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.945 "is_configured": false, 00:08:26.945 "data_offset": 0, 00:08:26.945 "data_size": 0 00:08:26.945 } 00:08:26.945 ] 00:08:26.945 }' 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.945 04:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.514 [2024-11-21 04:54:44.053337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.514 [2024-11-21 04:54:44.053389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:27.514 [2024-11-21 04:54:44.053402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:27.514 [2024-11-21 04:54:44.053791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:27.514 [2024-11-21 04:54:44.054026] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000006980 00:08:27.514 [2024-11-21 04:54:44.054053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:27.514 [2024-11-21 04:54:44.054329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.514 BaseBdev3 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.514 [ 00:08:27.514 { 00:08:27.514 "name": "BaseBdev3", 00:08:27.514 "aliases": [ 
00:08:27.514 "5006703c-c6dc-42e4-b685-d3babb30f51e" 00:08:27.514 ], 00:08:27.514 "product_name": "Malloc disk", 00:08:27.514 "block_size": 512, 00:08:27.514 "num_blocks": 65536, 00:08:27.514 "uuid": "5006703c-c6dc-42e4-b685-d3babb30f51e", 00:08:27.514 "assigned_rate_limits": { 00:08:27.514 "rw_ios_per_sec": 0, 00:08:27.514 "rw_mbytes_per_sec": 0, 00:08:27.514 "r_mbytes_per_sec": 0, 00:08:27.514 "w_mbytes_per_sec": 0 00:08:27.514 }, 00:08:27.514 "claimed": true, 00:08:27.514 "claim_type": "exclusive_write", 00:08:27.514 "zoned": false, 00:08:27.514 "supported_io_types": { 00:08:27.514 "read": true, 00:08:27.514 "write": true, 00:08:27.514 "unmap": true, 00:08:27.514 "flush": true, 00:08:27.514 "reset": true, 00:08:27.514 "nvme_admin": false, 00:08:27.514 "nvme_io": false, 00:08:27.514 "nvme_io_md": false, 00:08:27.514 "write_zeroes": true, 00:08:27.514 "zcopy": true, 00:08:27.514 "get_zone_info": false, 00:08:27.514 "zone_management": false, 00:08:27.514 "zone_append": false, 00:08:27.514 "compare": false, 00:08:27.514 "compare_and_write": false, 00:08:27.514 "abort": true, 00:08:27.514 "seek_hole": false, 00:08:27.514 "seek_data": false, 00:08:27.514 "copy": true, 00:08:27.514 "nvme_iov_md": false 00:08:27.514 }, 00:08:27.514 "memory_domains": [ 00:08:27.514 { 00:08:27.514 "dma_device_id": "system", 00:08:27.514 "dma_device_type": 1 00:08:27.514 }, 00:08:27.514 { 00:08:27.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.514 "dma_device_type": 2 00:08:27.514 } 00:08:27.514 ], 00:08:27.514 "driver_specific": {} 00:08:27.514 } 00:08:27.514 ] 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.514 "name": "Existed_Raid", 00:08:27.514 "uuid": "42b7656c-b1b0-4b9f-b985-e5c0cc3fd103", 00:08:27.514 "strip_size_kb": 64, 00:08:27.514 "state": "online", 
00:08:27.514 "raid_level": "concat", 00:08:27.514 "superblock": false, 00:08:27.514 "num_base_bdevs": 3, 00:08:27.514 "num_base_bdevs_discovered": 3, 00:08:27.514 "num_base_bdevs_operational": 3, 00:08:27.514 "base_bdevs_list": [ 00:08:27.514 { 00:08:27.514 "name": "BaseBdev1", 00:08:27.514 "uuid": "1e5efe95-fbb4-4c4c-9d23-847418300245", 00:08:27.514 "is_configured": true, 00:08:27.514 "data_offset": 0, 00:08:27.514 "data_size": 65536 00:08:27.514 }, 00:08:27.514 { 00:08:27.514 "name": "BaseBdev2", 00:08:27.514 "uuid": "a1a6fc93-16cd-4556-ad41-370787ab9715", 00:08:27.514 "is_configured": true, 00:08:27.514 "data_offset": 0, 00:08:27.514 "data_size": 65536 00:08:27.514 }, 00:08:27.514 { 00:08:27.514 "name": "BaseBdev3", 00:08:27.514 "uuid": "5006703c-c6dc-42e4-b685-d3babb30f51e", 00:08:27.514 "is_configured": true, 00:08:27.514 "data_offset": 0, 00:08:27.514 "data_size": 65536 00:08:27.514 } 00:08:27.514 ] 00:08:27.514 }' 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.514 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.774 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.774 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:27.774 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.774 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.774 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.774 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.774 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.774 04:54:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:27.774 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.774 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.774 [2024-11-21 04:54:44.496946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.034 "name": "Existed_Raid", 00:08:28.034 "aliases": [ 00:08:28.034 "42b7656c-b1b0-4b9f-b985-e5c0cc3fd103" 00:08:28.034 ], 00:08:28.034 "product_name": "Raid Volume", 00:08:28.034 "block_size": 512, 00:08:28.034 "num_blocks": 196608, 00:08:28.034 "uuid": "42b7656c-b1b0-4b9f-b985-e5c0cc3fd103", 00:08:28.034 "assigned_rate_limits": { 00:08:28.034 "rw_ios_per_sec": 0, 00:08:28.034 "rw_mbytes_per_sec": 0, 00:08:28.034 "r_mbytes_per_sec": 0, 00:08:28.034 "w_mbytes_per_sec": 0 00:08:28.034 }, 00:08:28.034 "claimed": false, 00:08:28.034 "zoned": false, 00:08:28.034 "supported_io_types": { 00:08:28.034 "read": true, 00:08:28.034 "write": true, 00:08:28.034 "unmap": true, 00:08:28.034 "flush": true, 00:08:28.034 "reset": true, 00:08:28.034 "nvme_admin": false, 00:08:28.034 "nvme_io": false, 00:08:28.034 "nvme_io_md": false, 00:08:28.034 "write_zeroes": true, 00:08:28.034 "zcopy": false, 00:08:28.034 "get_zone_info": false, 00:08:28.034 "zone_management": false, 00:08:28.034 "zone_append": false, 00:08:28.034 "compare": false, 00:08:28.034 "compare_and_write": false, 00:08:28.034 "abort": false, 00:08:28.034 "seek_hole": false, 00:08:28.034 "seek_data": false, 00:08:28.034 "copy": false, 00:08:28.034 "nvme_iov_md": false 00:08:28.034 }, 00:08:28.034 "memory_domains": [ 00:08:28.034 { 00:08:28.034 "dma_device_id": "system", 00:08:28.034 "dma_device_type": 1 
00:08:28.034 }, 00:08:28.034 { 00:08:28.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.034 "dma_device_type": 2 00:08:28.034 }, 00:08:28.034 { 00:08:28.034 "dma_device_id": "system", 00:08:28.034 "dma_device_type": 1 00:08:28.034 }, 00:08:28.034 { 00:08:28.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.034 "dma_device_type": 2 00:08:28.034 }, 00:08:28.034 { 00:08:28.034 "dma_device_id": "system", 00:08:28.034 "dma_device_type": 1 00:08:28.034 }, 00:08:28.034 { 00:08:28.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.034 "dma_device_type": 2 00:08:28.034 } 00:08:28.034 ], 00:08:28.034 "driver_specific": { 00:08:28.034 "raid": { 00:08:28.034 "uuid": "42b7656c-b1b0-4b9f-b985-e5c0cc3fd103", 00:08:28.034 "strip_size_kb": 64, 00:08:28.034 "state": "online", 00:08:28.034 "raid_level": "concat", 00:08:28.034 "superblock": false, 00:08:28.034 "num_base_bdevs": 3, 00:08:28.034 "num_base_bdevs_discovered": 3, 00:08:28.034 "num_base_bdevs_operational": 3, 00:08:28.034 "base_bdevs_list": [ 00:08:28.034 { 00:08:28.034 "name": "BaseBdev1", 00:08:28.034 "uuid": "1e5efe95-fbb4-4c4c-9d23-847418300245", 00:08:28.034 "is_configured": true, 00:08:28.034 "data_offset": 0, 00:08:28.034 "data_size": 65536 00:08:28.034 }, 00:08:28.034 { 00:08:28.034 "name": "BaseBdev2", 00:08:28.034 "uuid": "a1a6fc93-16cd-4556-ad41-370787ab9715", 00:08:28.034 "is_configured": true, 00:08:28.034 "data_offset": 0, 00:08:28.034 "data_size": 65536 00:08:28.034 }, 00:08:28.034 { 00:08:28.034 "name": "BaseBdev3", 00:08:28.034 "uuid": "5006703c-c6dc-42e4-b685-d3babb30f51e", 00:08:28.034 "is_configured": true, 00:08:28.034 "data_offset": 0, 00:08:28.034 "data_size": 65536 00:08:28.034 } 00:08:28.034 ] 00:08:28.034 } 00:08:28.034 } 00:08:28.034 }' 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:28.034 BaseBdev2 00:08:28.034 BaseBdev3' 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.034 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.035 [2024-11-21 04:54:44.728290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:28.035 [2024-11-21 04:54:44.728331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.035 [2024-11-21 04:54:44.728403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.035 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.294 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.294 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.294 "name": "Existed_Raid", 00:08:28.294 "uuid": "42b7656c-b1b0-4b9f-b985-e5c0cc3fd103", 00:08:28.294 "strip_size_kb": 64, 00:08:28.294 "state": "offline", 00:08:28.294 "raid_level": "concat", 00:08:28.294 "superblock": false, 00:08:28.294 "num_base_bdevs": 3, 00:08:28.294 "num_base_bdevs_discovered": 2, 00:08:28.294 "num_base_bdevs_operational": 2, 00:08:28.294 "base_bdevs_list": [ 00:08:28.294 { 00:08:28.294 "name": null, 00:08:28.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.294 "is_configured": false, 00:08:28.294 "data_offset": 0, 00:08:28.294 "data_size": 65536 00:08:28.294 }, 00:08:28.294 { 00:08:28.294 "name": "BaseBdev2", 00:08:28.294 "uuid": "a1a6fc93-16cd-4556-ad41-370787ab9715", 00:08:28.294 "is_configured": true, 00:08:28.294 "data_offset": 0, 00:08:28.294 "data_size": 65536 00:08:28.294 }, 00:08:28.294 { 00:08:28.294 "name": "BaseBdev3", 00:08:28.294 "uuid": "5006703c-c6dc-42e4-b685-d3babb30f51e", 00:08:28.294 "is_configured": true, 00:08:28.294 "data_offset": 0, 00:08:28.294 "data_size": 65536 00:08:28.294 } 00:08:28.294 ] 00:08:28.294 }' 00:08:28.294 04:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.294 04:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.554 [2024-11-21 04:54:45.215394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.554 04:54:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.554 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.554 [2024-11-21 04:54:45.283677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.554 [2024-11-21 04:54:45.283737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:28.815 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.815 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:28.815 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:28.816 
04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.816 BaseBdev2 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.816 [ 00:08:28.816 { 00:08:28.816 "name": "BaseBdev2", 00:08:28.816 "aliases": [ 00:08:28.816 "d577100e-aaad-4eac-a282-4d0f7a7a6c6b" 00:08:28.816 ], 00:08:28.816 "product_name": "Malloc disk", 00:08:28.816 "block_size": 512, 00:08:28.816 "num_blocks": 65536, 00:08:28.816 "uuid": "d577100e-aaad-4eac-a282-4d0f7a7a6c6b", 00:08:28.816 "assigned_rate_limits": { 00:08:28.816 "rw_ios_per_sec": 0, 00:08:28.816 "rw_mbytes_per_sec": 0, 00:08:28.816 "r_mbytes_per_sec": 0, 00:08:28.816 "w_mbytes_per_sec": 0 00:08:28.816 }, 00:08:28.816 "claimed": false, 00:08:28.816 "zoned": false, 00:08:28.816 "supported_io_types": { 00:08:28.816 "read": true, 00:08:28.816 "write": true, 00:08:28.816 "unmap": true, 00:08:28.816 "flush": true, 00:08:28.816 "reset": true, 00:08:28.816 "nvme_admin": false, 00:08:28.816 "nvme_io": false, 00:08:28.816 "nvme_io_md": false, 00:08:28.816 "write_zeroes": true, 00:08:28.816 "zcopy": true, 00:08:28.816 "get_zone_info": false, 00:08:28.816 "zone_management": false, 00:08:28.816 "zone_append": false, 00:08:28.816 "compare": false, 00:08:28.816 "compare_and_write": false, 00:08:28.816 "abort": true, 00:08:28.816 "seek_hole": false, 00:08:28.816 "seek_data": false, 00:08:28.816 "copy": true, 00:08:28.816 "nvme_iov_md": false 00:08:28.816 }, 00:08:28.816 "memory_domains": [ 00:08:28.816 { 00:08:28.816 "dma_device_id": "system", 00:08:28.816 "dma_device_type": 1 00:08:28.816 }, 00:08:28.816 { 00:08:28.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.816 "dma_device_type": 2 00:08:28.816 } 00:08:28.816 ], 00:08:28.816 "driver_specific": {} 00:08:28.816 } 00:08:28.816 ] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:28.816 
04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.816 BaseBdev3 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.816 [ 00:08:28.816 { 00:08:28.816 "name": "BaseBdev3", 00:08:28.816 "aliases": [ 00:08:28.816 "0b12ce95-61f1-423c-b689-d2d5606556d2" 00:08:28.816 ], 00:08:28.816 "product_name": "Malloc disk", 00:08:28.816 "block_size": 512, 00:08:28.816 "num_blocks": 65536, 00:08:28.816 "uuid": "0b12ce95-61f1-423c-b689-d2d5606556d2", 00:08:28.816 "assigned_rate_limits": { 00:08:28.816 "rw_ios_per_sec": 0, 00:08:28.816 "rw_mbytes_per_sec": 0, 00:08:28.816 "r_mbytes_per_sec": 0, 00:08:28.816 "w_mbytes_per_sec": 0 00:08:28.816 }, 00:08:28.816 "claimed": false, 00:08:28.816 "zoned": false, 00:08:28.816 "supported_io_types": { 00:08:28.816 "read": true, 00:08:28.816 "write": true, 00:08:28.816 "unmap": true, 00:08:28.816 "flush": true, 00:08:28.816 "reset": true, 00:08:28.816 "nvme_admin": false, 00:08:28.816 "nvme_io": false, 00:08:28.816 "nvme_io_md": false, 00:08:28.816 "write_zeroes": true, 00:08:28.816 "zcopy": true, 00:08:28.816 "get_zone_info": false, 00:08:28.816 "zone_management": false, 00:08:28.816 "zone_append": false, 00:08:28.816 "compare": false, 00:08:28.816 "compare_and_write": false, 00:08:28.816 "abort": true, 00:08:28.816 "seek_hole": false, 00:08:28.816 "seek_data": false, 00:08:28.816 "copy": true, 00:08:28.816 "nvme_iov_md": false 00:08:28.816 }, 00:08:28.816 "memory_domains": [ 00:08:28.816 { 00:08:28.816 "dma_device_id": "system", 00:08:28.816 "dma_device_type": 1 00:08:28.816 }, 00:08:28.816 { 00:08:28.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.816 "dma_device_type": 2 00:08:28.816 } 00:08:28.816 ], 00:08:28.816 "driver_specific": {} 00:08:28.816 } 00:08:28.816 ] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:28.816 
04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.816 [2024-11-21 04:54:45.470644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.816 [2024-11-21 04:54:45.470704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.816 [2024-11-21 04:54:45.470725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.816 [2024-11-21 04:54:45.472826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.816 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.817 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.817 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.817 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.817 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.817 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.817 "name": "Existed_Raid", 00:08:28.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.817 "strip_size_kb": 64, 00:08:28.817 "state": "configuring", 00:08:28.817 "raid_level": "concat", 00:08:28.817 "superblock": false, 00:08:28.817 "num_base_bdevs": 3, 00:08:28.817 "num_base_bdevs_discovered": 2, 00:08:28.817 "num_base_bdevs_operational": 3, 00:08:28.817 "base_bdevs_list": [ 00:08:28.817 { 00:08:28.817 "name": "BaseBdev1", 00:08:28.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.817 "is_configured": false, 00:08:28.817 "data_offset": 0, 00:08:28.817 "data_size": 0 00:08:28.817 }, 00:08:28.817 { 00:08:28.817 "name": "BaseBdev2", 00:08:28.817 "uuid": "d577100e-aaad-4eac-a282-4d0f7a7a6c6b", 00:08:28.817 "is_configured": true, 00:08:28.817 "data_offset": 0, 00:08:28.817 "data_size": 65536 00:08:28.817 }, 00:08:28.817 { 00:08:28.817 "name": "BaseBdev3", 00:08:28.817 "uuid": 
"0b12ce95-61f1-423c-b689-d2d5606556d2", 00:08:28.817 "is_configured": true, 00:08:28.817 "data_offset": 0, 00:08:28.817 "data_size": 65536 00:08:28.817 } 00:08:28.817 ] 00:08:28.817 }' 00:08:28.817 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.817 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.386 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:29.386 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.386 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.386 [2024-11-21 04:54:45.929942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:29.386 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.386 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:29.386 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.386 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.386 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.387 "name": "Existed_Raid", 00:08:29.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.387 "strip_size_kb": 64, 00:08:29.387 "state": "configuring", 00:08:29.387 "raid_level": "concat", 00:08:29.387 "superblock": false, 00:08:29.387 "num_base_bdevs": 3, 00:08:29.387 "num_base_bdevs_discovered": 1, 00:08:29.387 "num_base_bdevs_operational": 3, 00:08:29.387 "base_bdevs_list": [ 00:08:29.387 { 00:08:29.387 "name": "BaseBdev1", 00:08:29.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.387 "is_configured": false, 00:08:29.387 "data_offset": 0, 00:08:29.387 "data_size": 0 00:08:29.387 }, 00:08:29.387 { 00:08:29.387 "name": null, 00:08:29.387 "uuid": "d577100e-aaad-4eac-a282-4d0f7a7a6c6b", 00:08:29.387 "is_configured": false, 00:08:29.387 "data_offset": 0, 00:08:29.387 "data_size": 65536 00:08:29.387 }, 00:08:29.387 { 00:08:29.387 "name": "BaseBdev3", 00:08:29.387 "uuid": "0b12ce95-61f1-423c-b689-d2d5606556d2", 00:08:29.387 "is_configured": true, 00:08:29.387 "data_offset": 0, 00:08:29.387 "data_size": 65536 00:08:29.387 } 00:08:29.387 ] 00:08:29.387 }' 00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:29.387 04:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.646 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.646 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.646 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.646 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:29.646 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.907 [2024-11-21 04:54:46.433775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.907 BaseBdev1 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.907 [ 00:08:29.907 { 00:08:29.907 "name": "BaseBdev1", 00:08:29.907 "aliases": [ 00:08:29.907 "74c5379d-bc1f-4470-a93e-4ad80436d513" 00:08:29.907 ], 00:08:29.907 "product_name": "Malloc disk", 00:08:29.907 "block_size": 512, 00:08:29.907 "num_blocks": 65536, 00:08:29.907 "uuid": "74c5379d-bc1f-4470-a93e-4ad80436d513", 00:08:29.907 "assigned_rate_limits": { 00:08:29.907 "rw_ios_per_sec": 0, 00:08:29.907 "rw_mbytes_per_sec": 0, 00:08:29.907 "r_mbytes_per_sec": 0, 00:08:29.907 "w_mbytes_per_sec": 0 00:08:29.907 }, 00:08:29.907 "claimed": true, 00:08:29.907 "claim_type": "exclusive_write", 00:08:29.907 "zoned": false, 00:08:29.907 "supported_io_types": { 00:08:29.907 "read": true, 00:08:29.907 "write": true, 00:08:29.907 "unmap": true, 00:08:29.907 "flush": true, 00:08:29.907 "reset": true, 00:08:29.907 "nvme_admin": false, 00:08:29.907 "nvme_io": false, 00:08:29.907 "nvme_io_md": false, 00:08:29.907 "write_zeroes": true, 00:08:29.907 "zcopy": true, 00:08:29.907 "get_zone_info": false, 00:08:29.907 "zone_management": false, 00:08:29.907 "zone_append": false, 00:08:29.907 "compare": false, 00:08:29.907 "compare_and_write": false, 
00:08:29.907 "abort": true, 00:08:29.907 "seek_hole": false, 00:08:29.907 "seek_data": false, 00:08:29.907 "copy": true, 00:08:29.907 "nvme_iov_md": false 00:08:29.907 }, 00:08:29.907 "memory_domains": [ 00:08:29.907 { 00:08:29.907 "dma_device_id": "system", 00:08:29.907 "dma_device_type": 1 00:08:29.907 }, 00:08:29.907 { 00:08:29.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.907 "dma_device_type": 2 00:08:29.907 } 00:08:29.907 ], 00:08:29.907 "driver_specific": {} 00:08:29.907 } 00:08:29.907 ] 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.907 "name": "Existed_Raid", 00:08:29.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.907 "strip_size_kb": 64, 00:08:29.907 "state": "configuring", 00:08:29.907 "raid_level": "concat", 00:08:29.907 "superblock": false, 00:08:29.907 "num_base_bdevs": 3, 00:08:29.907 "num_base_bdevs_discovered": 2, 00:08:29.907 "num_base_bdevs_operational": 3, 00:08:29.907 "base_bdevs_list": [ 00:08:29.907 { 00:08:29.907 "name": "BaseBdev1", 00:08:29.907 "uuid": "74c5379d-bc1f-4470-a93e-4ad80436d513", 00:08:29.907 "is_configured": true, 00:08:29.907 "data_offset": 0, 00:08:29.907 "data_size": 65536 00:08:29.907 }, 00:08:29.907 { 00:08:29.907 "name": null, 00:08:29.907 "uuid": "d577100e-aaad-4eac-a282-4d0f7a7a6c6b", 00:08:29.907 "is_configured": false, 00:08:29.907 "data_offset": 0, 00:08:29.907 "data_size": 65536 00:08:29.907 }, 00:08:29.907 { 00:08:29.907 "name": "BaseBdev3", 00:08:29.907 "uuid": "0b12ce95-61f1-423c-b689-d2d5606556d2", 00:08:29.907 "is_configured": true, 00:08:29.907 "data_offset": 0, 00:08:29.907 "data_size": 65536 00:08:29.907 } 00:08:29.907 ] 00:08:29.907 }' 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.907 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.167 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.167 
04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.167 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.167 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.167 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.427 [2024-11-21 04:54:46.913003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.427 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.427 "name": "Existed_Raid", 00:08:30.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.427 "strip_size_kb": 64, 00:08:30.427 "state": "configuring", 00:08:30.427 "raid_level": "concat", 00:08:30.427 "superblock": false, 00:08:30.427 "num_base_bdevs": 3, 00:08:30.427 "num_base_bdevs_discovered": 1, 00:08:30.427 "num_base_bdevs_operational": 3, 00:08:30.427 "base_bdevs_list": [ 00:08:30.427 { 00:08:30.427 "name": "BaseBdev1", 00:08:30.427 "uuid": "74c5379d-bc1f-4470-a93e-4ad80436d513", 00:08:30.427 "is_configured": true, 00:08:30.427 "data_offset": 0, 00:08:30.427 "data_size": 65536 00:08:30.427 }, 00:08:30.427 { 00:08:30.427 "name": null, 00:08:30.427 "uuid": "d577100e-aaad-4eac-a282-4d0f7a7a6c6b", 00:08:30.427 "is_configured": false, 00:08:30.427 "data_offset": 0, 00:08:30.427 "data_size": 65536 00:08:30.427 }, 00:08:30.427 { 00:08:30.427 "name": null, 00:08:30.428 "uuid": "0b12ce95-61f1-423c-b689-d2d5606556d2", 00:08:30.428 "is_configured": false, 00:08:30.428 "data_offset": 0, 00:08:30.428 "data_size": 65536 00:08:30.428 
} 00:08:30.428 ] 00:08:30.428 }' 00:08:30.428 04:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.428 04:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.687 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:30.687 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.687 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.687 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.687 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.961 [2024-11-21 04:54:47.444127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.961 "name": "Existed_Raid", 00:08:30.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.961 "strip_size_kb": 64, 00:08:30.961 "state": "configuring", 00:08:30.961 "raid_level": "concat", 00:08:30.961 "superblock": false, 00:08:30.961 "num_base_bdevs": 3, 00:08:30.961 "num_base_bdevs_discovered": 2, 00:08:30.961 "num_base_bdevs_operational": 3, 00:08:30.961 "base_bdevs_list": [ 00:08:30.961 { 00:08:30.961 "name": "BaseBdev1", 00:08:30.961 "uuid": "74c5379d-bc1f-4470-a93e-4ad80436d513", 00:08:30.961 "is_configured": true, 00:08:30.961 "data_offset": 0, 00:08:30.961 "data_size": 65536 00:08:30.961 }, 00:08:30.961 { 
00:08:30.961 "name": null, 00:08:30.961 "uuid": "d577100e-aaad-4eac-a282-4d0f7a7a6c6b", 00:08:30.961 "is_configured": false, 00:08:30.961 "data_offset": 0, 00:08:30.961 "data_size": 65536 00:08:30.961 }, 00:08:30.961 { 00:08:30.961 "name": "BaseBdev3", 00:08:30.961 "uuid": "0b12ce95-61f1-423c-b689-d2d5606556d2", 00:08:30.961 "is_configured": true, 00:08:30.961 "data_offset": 0, 00:08:30.961 "data_size": 65536 00:08:30.961 } 00:08:30.961 ] 00:08:30.961 }' 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.961 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.221 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:31.221 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.221 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.221 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.221 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.481 [2024-11-21 04:54:47.971343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.481 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.481 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.481 04:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.481 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.481 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.481 "name": "Existed_Raid", 00:08:31.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.481 "strip_size_kb": 64, 00:08:31.481 "state": "configuring", 00:08:31.481 "raid_level": "concat", 00:08:31.481 "superblock": false, 00:08:31.481 "num_base_bdevs": 3, 
00:08:31.481 "num_base_bdevs_discovered": 1, 00:08:31.481 "num_base_bdevs_operational": 3, 00:08:31.481 "base_bdevs_list": [ 00:08:31.481 { 00:08:31.481 "name": null, 00:08:31.481 "uuid": "74c5379d-bc1f-4470-a93e-4ad80436d513", 00:08:31.481 "is_configured": false, 00:08:31.481 "data_offset": 0, 00:08:31.481 "data_size": 65536 00:08:31.481 }, 00:08:31.481 { 00:08:31.481 "name": null, 00:08:31.481 "uuid": "d577100e-aaad-4eac-a282-4d0f7a7a6c6b", 00:08:31.481 "is_configured": false, 00:08:31.481 "data_offset": 0, 00:08:31.481 "data_size": 65536 00:08:31.481 }, 00:08:31.481 { 00:08:31.481 "name": "BaseBdev3", 00:08:31.481 "uuid": "0b12ce95-61f1-423c-b689-d2d5606556d2", 00:08:31.481 "is_configured": true, 00:08:31.481 "data_offset": 0, 00:08:31.481 "data_size": 65536 00:08:31.481 } 00:08:31.481 ] 00:08:31.481 }' 00:08:31.481 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.481 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.050 04:54:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.050 [2024-11-21 04:54:48.525725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.050 "name": "Existed_Raid", 00:08:32.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.050 "strip_size_kb": 64, 00:08:32.050 "state": "configuring", 00:08:32.050 "raid_level": "concat", 00:08:32.050 "superblock": false, 00:08:32.050 "num_base_bdevs": 3, 00:08:32.050 "num_base_bdevs_discovered": 2, 00:08:32.050 "num_base_bdevs_operational": 3, 00:08:32.050 "base_bdevs_list": [ 00:08:32.050 { 00:08:32.050 "name": null, 00:08:32.050 "uuid": "74c5379d-bc1f-4470-a93e-4ad80436d513", 00:08:32.050 "is_configured": false, 00:08:32.050 "data_offset": 0, 00:08:32.050 "data_size": 65536 00:08:32.050 }, 00:08:32.050 { 00:08:32.050 "name": "BaseBdev2", 00:08:32.050 "uuid": "d577100e-aaad-4eac-a282-4d0f7a7a6c6b", 00:08:32.050 "is_configured": true, 00:08:32.050 "data_offset": 0, 00:08:32.050 "data_size": 65536 00:08:32.050 }, 00:08:32.050 { 00:08:32.050 "name": "BaseBdev3", 00:08:32.050 "uuid": "0b12ce95-61f1-423c-b689-d2d5606556d2", 00:08:32.050 "is_configured": true, 00:08:32.050 "data_offset": 0, 00:08:32.050 "data_size": 65536 00:08:32.050 } 00:08:32.050 ] 00:08:32.050 }' 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.050 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.310 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.310 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.310 04:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.310 04:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:32.310 04:54:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.310 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:32.310 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:32.310 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.310 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.310 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 74c5379d-bc1f-4470-a93e-4ad80436d513 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.570 [2024-11-21 04:54:49.073763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:32.570 [2024-11-21 04:54:49.073914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:32.570 [2024-11-21 04:54:49.073943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:32.570 [2024-11-21 04:54:49.074350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:32.570 [2024-11-21 04:54:49.074541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:32.570 [2024-11-21 04:54:49.074581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:32.570 [2024-11-21 04:54:49.074866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:32.570 NewBaseBdev 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.570 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.570 [ 00:08:32.570 { 00:08:32.570 "name": "NewBaseBdev", 00:08:32.570 "aliases": [ 00:08:32.570 "74c5379d-bc1f-4470-a93e-4ad80436d513" 00:08:32.570 ], 00:08:32.570 "product_name": "Malloc disk", 00:08:32.570 "block_size": 512, 00:08:32.570 "num_blocks": 65536, 00:08:32.570 "uuid": "74c5379d-bc1f-4470-a93e-4ad80436d513", 00:08:32.570 "assigned_rate_limits": { 
00:08:32.570 "rw_ios_per_sec": 0, 00:08:32.570 "rw_mbytes_per_sec": 0, 00:08:32.570 "r_mbytes_per_sec": 0, 00:08:32.570 "w_mbytes_per_sec": 0 00:08:32.570 }, 00:08:32.570 "claimed": true, 00:08:32.570 "claim_type": "exclusive_write", 00:08:32.570 "zoned": false, 00:08:32.570 "supported_io_types": { 00:08:32.571 "read": true, 00:08:32.571 "write": true, 00:08:32.571 "unmap": true, 00:08:32.571 "flush": true, 00:08:32.571 "reset": true, 00:08:32.571 "nvme_admin": false, 00:08:32.571 "nvme_io": false, 00:08:32.571 "nvme_io_md": false, 00:08:32.571 "write_zeroes": true, 00:08:32.571 "zcopy": true, 00:08:32.571 "get_zone_info": false, 00:08:32.571 "zone_management": false, 00:08:32.571 "zone_append": false, 00:08:32.571 "compare": false, 00:08:32.571 "compare_and_write": false, 00:08:32.571 "abort": true, 00:08:32.571 "seek_hole": false, 00:08:32.571 "seek_data": false, 00:08:32.571 "copy": true, 00:08:32.571 "nvme_iov_md": false 00:08:32.571 }, 00:08:32.571 "memory_domains": [ 00:08:32.571 { 00:08:32.571 "dma_device_id": "system", 00:08:32.571 "dma_device_type": 1 00:08:32.571 }, 00:08:32.571 { 00:08:32.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.571 "dma_device_type": 2 00:08:32.571 } 00:08:32.571 ], 00:08:32.571 "driver_specific": {} 00:08:32.571 } 00:08:32.571 ] 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.571 "name": "Existed_Raid", 00:08:32.571 "uuid": "a18af79c-c257-488d-a3d7-3ef138c28d50", 00:08:32.571 "strip_size_kb": 64, 00:08:32.571 "state": "online", 00:08:32.571 "raid_level": "concat", 00:08:32.571 "superblock": false, 00:08:32.571 "num_base_bdevs": 3, 00:08:32.571 "num_base_bdevs_discovered": 3, 00:08:32.571 "num_base_bdevs_operational": 3, 00:08:32.571 "base_bdevs_list": [ 00:08:32.571 { 00:08:32.571 "name": "NewBaseBdev", 00:08:32.571 "uuid": "74c5379d-bc1f-4470-a93e-4ad80436d513", 00:08:32.571 "is_configured": true, 00:08:32.571 "data_offset": 0, 00:08:32.571 "data_size": 65536 00:08:32.571 }, 00:08:32.571 { 00:08:32.571 "name": 
"BaseBdev2", 00:08:32.571 "uuid": "d577100e-aaad-4eac-a282-4d0f7a7a6c6b", 00:08:32.571 "is_configured": true, 00:08:32.571 "data_offset": 0, 00:08:32.571 "data_size": 65536 00:08:32.571 }, 00:08:32.571 { 00:08:32.571 "name": "BaseBdev3", 00:08:32.571 "uuid": "0b12ce95-61f1-423c-b689-d2d5606556d2", 00:08:32.571 "is_configured": true, 00:08:32.571 "data_offset": 0, 00:08:32.571 "data_size": 65536 00:08:32.571 } 00:08:32.571 ] 00:08:32.571 }' 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.571 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.830 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:32.830 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:32.830 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.830 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:32.830 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.830 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.090 [2024-11-21 04:54:49.573272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.090 "name": "Existed_Raid", 00:08:33.090 "aliases": [ 00:08:33.090 "a18af79c-c257-488d-a3d7-3ef138c28d50" 00:08:33.090 ], 00:08:33.090 "product_name": "Raid Volume", 00:08:33.090 "block_size": 512, 00:08:33.090 "num_blocks": 196608, 00:08:33.090 "uuid": "a18af79c-c257-488d-a3d7-3ef138c28d50", 00:08:33.090 "assigned_rate_limits": { 00:08:33.090 "rw_ios_per_sec": 0, 00:08:33.090 "rw_mbytes_per_sec": 0, 00:08:33.090 "r_mbytes_per_sec": 0, 00:08:33.090 "w_mbytes_per_sec": 0 00:08:33.090 }, 00:08:33.090 "claimed": false, 00:08:33.090 "zoned": false, 00:08:33.090 "supported_io_types": { 00:08:33.090 "read": true, 00:08:33.090 "write": true, 00:08:33.090 "unmap": true, 00:08:33.090 "flush": true, 00:08:33.090 "reset": true, 00:08:33.090 "nvme_admin": false, 00:08:33.090 "nvme_io": false, 00:08:33.090 "nvme_io_md": false, 00:08:33.090 "write_zeroes": true, 00:08:33.090 "zcopy": false, 00:08:33.090 "get_zone_info": false, 00:08:33.090 "zone_management": false, 00:08:33.090 "zone_append": false, 00:08:33.090 "compare": false, 00:08:33.090 "compare_and_write": false, 00:08:33.090 "abort": false, 00:08:33.090 "seek_hole": false, 00:08:33.090 "seek_data": false, 00:08:33.090 "copy": false, 00:08:33.090 "nvme_iov_md": false 00:08:33.090 }, 00:08:33.090 "memory_domains": [ 00:08:33.090 { 00:08:33.090 "dma_device_id": "system", 00:08:33.090 "dma_device_type": 1 00:08:33.090 }, 00:08:33.090 { 00:08:33.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.090 "dma_device_type": 2 00:08:33.090 }, 00:08:33.090 { 00:08:33.090 "dma_device_id": "system", 00:08:33.090 "dma_device_type": 1 00:08:33.090 }, 00:08:33.090 { 00:08:33.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.090 "dma_device_type": 2 00:08:33.090 }, 00:08:33.090 { 00:08:33.090 "dma_device_id": "system", 00:08:33.090 "dma_device_type": 1 00:08:33.090 }, 00:08:33.090 { 00:08:33.090 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:33.090 "dma_device_type": 2 00:08:33.090 } 00:08:33.090 ], 00:08:33.090 "driver_specific": { 00:08:33.090 "raid": { 00:08:33.090 "uuid": "a18af79c-c257-488d-a3d7-3ef138c28d50", 00:08:33.090 "strip_size_kb": 64, 00:08:33.090 "state": "online", 00:08:33.090 "raid_level": "concat", 00:08:33.090 "superblock": false, 00:08:33.090 "num_base_bdevs": 3, 00:08:33.090 "num_base_bdevs_discovered": 3, 00:08:33.090 "num_base_bdevs_operational": 3, 00:08:33.090 "base_bdevs_list": [ 00:08:33.090 { 00:08:33.090 "name": "NewBaseBdev", 00:08:33.090 "uuid": "74c5379d-bc1f-4470-a93e-4ad80436d513", 00:08:33.090 "is_configured": true, 00:08:33.090 "data_offset": 0, 00:08:33.090 "data_size": 65536 00:08:33.090 }, 00:08:33.090 { 00:08:33.090 "name": "BaseBdev2", 00:08:33.090 "uuid": "d577100e-aaad-4eac-a282-4d0f7a7a6c6b", 00:08:33.090 "is_configured": true, 00:08:33.090 "data_offset": 0, 00:08:33.090 "data_size": 65536 00:08:33.090 }, 00:08:33.090 { 00:08:33.090 "name": "BaseBdev3", 00:08:33.090 "uuid": "0b12ce95-61f1-423c-b689-d2d5606556d2", 00:08:33.090 "is_configured": true, 00:08:33.090 "data_offset": 0, 00:08:33.090 "data_size": 65536 00:08:33.090 } 00:08:33.090 ] 00:08:33.090 } 00:08:33.090 } 00:08:33.090 }' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:33.090 BaseBdev2 00:08:33.090 BaseBdev3' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.090 04:54:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.090 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.350 [2024-11-21 04:54:49.852454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.350 [2024-11-21 04:54:49.852523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.350 [2024-11-21 04:54:49.852658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.350 [2024-11-21 04:54:49.852743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.350 [2024-11-21 04:54:49.852815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76900 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 76900 ']' 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 76900 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76900 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76900' 00:08:33.350 killing process with pid 76900 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 76900 00:08:33.350 [2024-11-21 04:54:49.910441] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.350 04:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 76900 00:08:33.350 [2024-11-21 04:54:49.969220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.609 ************************************ 00:08:33.609 END TEST raid_state_function_test 00:08:33.609 ************************************ 00:08:33.609 04:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:33.609 00:08:33.609 real 0m9.135s 00:08:33.609 user 0m15.365s 00:08:33.609 sys 0m1.908s 00:08:33.609 04:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.609 04:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.869 04:54:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:08:33.869 04:54:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:33.869 04:54:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.869 04:54:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.869 ************************************ 00:08:33.869 START TEST raid_state_function_test_sb 00:08:33.869 ************************************ 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:33.869 Process raid pid: 77510 00:08:33.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77510 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77510' 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77510 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77510 ']' 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.869 04:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.869 [2024-11-21 04:54:50.460434] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:08:33.869 [2024-11-21 04:54:50.460675] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.129 [2024-11-21 04:54:50.633619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.129 [2024-11-21 04:54:50.672738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.129 [2024-11-21 04:54:50.749379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.129 [2024-11-21 04:54:50.749549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.697 [2024-11-21 04:54:51.317149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.697 [2024-11-21 04:54:51.317280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.697 [2024-11-21 04:54:51.317313] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.697 [2024-11-21 04:54:51.317337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.697 [2024-11-21 04:54:51.317359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:08:34.697 [2024-11-21 04:54:51.317414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.697 "name": "Existed_Raid", 00:08:34.697 "uuid": "8bef7873-5d63-4d57-913a-47d024166fbe", 00:08:34.697 "strip_size_kb": 64, 00:08:34.697 "state": "configuring", 00:08:34.697 "raid_level": "concat", 00:08:34.697 "superblock": true, 00:08:34.697 "num_base_bdevs": 3, 00:08:34.697 "num_base_bdevs_discovered": 0, 00:08:34.697 "num_base_bdevs_operational": 3, 00:08:34.697 "base_bdevs_list": [ 00:08:34.697 { 00:08:34.697 "name": "BaseBdev1", 00:08:34.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.697 "is_configured": false, 00:08:34.697 "data_offset": 0, 00:08:34.697 "data_size": 0 00:08:34.697 }, 00:08:34.697 { 00:08:34.697 "name": "BaseBdev2", 00:08:34.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.697 "is_configured": false, 00:08:34.697 "data_offset": 0, 00:08:34.697 "data_size": 0 00:08:34.697 }, 00:08:34.697 { 00:08:34.697 "name": "BaseBdev3", 00:08:34.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.697 "is_configured": false, 00:08:34.697 "data_offset": 0, 00:08:34.697 "data_size": 0 00:08:34.697 } 00:08:34.697 ] 00:08:34.697 }' 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.697 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.267 [2024-11-21 04:54:51.760306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.267 [2024-11-21 04:54:51.760365] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.267 [2024-11-21 04:54:51.768266] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.267 [2024-11-21 04:54:51.768356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.267 [2024-11-21 04:54:51.768385] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.267 [2024-11-21 04:54:51.768410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.267 [2024-11-21 04:54:51.768428] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.267 [2024-11-21 04:54:51.768450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.267 [2024-11-21 04:54:51.791457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.267 BaseBdev1 
00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.267 [ 00:08:35.267 { 00:08:35.267 "name": "BaseBdev1", 00:08:35.267 "aliases": [ 00:08:35.267 "2073ac0b-fee7-4be9-a461-ec94db307c80" 00:08:35.267 ], 00:08:35.267 "product_name": "Malloc disk", 00:08:35.267 "block_size": 512, 00:08:35.267 "num_blocks": 65536, 00:08:35.267 "uuid": "2073ac0b-fee7-4be9-a461-ec94db307c80", 00:08:35.267 "assigned_rate_limits": { 00:08:35.267 
"rw_ios_per_sec": 0, 00:08:35.267 "rw_mbytes_per_sec": 0, 00:08:35.267 "r_mbytes_per_sec": 0, 00:08:35.267 "w_mbytes_per_sec": 0 00:08:35.267 }, 00:08:35.267 "claimed": true, 00:08:35.267 "claim_type": "exclusive_write", 00:08:35.267 "zoned": false, 00:08:35.267 "supported_io_types": { 00:08:35.267 "read": true, 00:08:35.267 "write": true, 00:08:35.267 "unmap": true, 00:08:35.267 "flush": true, 00:08:35.267 "reset": true, 00:08:35.267 "nvme_admin": false, 00:08:35.267 "nvme_io": false, 00:08:35.267 "nvme_io_md": false, 00:08:35.267 "write_zeroes": true, 00:08:35.267 "zcopy": true, 00:08:35.267 "get_zone_info": false, 00:08:35.267 "zone_management": false, 00:08:35.267 "zone_append": false, 00:08:35.267 "compare": false, 00:08:35.267 "compare_and_write": false, 00:08:35.267 "abort": true, 00:08:35.267 "seek_hole": false, 00:08:35.267 "seek_data": false, 00:08:35.267 "copy": true, 00:08:35.267 "nvme_iov_md": false 00:08:35.267 }, 00:08:35.267 "memory_domains": [ 00:08:35.267 { 00:08:35.267 "dma_device_id": "system", 00:08:35.267 "dma_device_type": 1 00:08:35.267 }, 00:08:35.267 { 00:08:35.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.267 "dma_device_type": 2 00:08:35.267 } 00:08:35.267 ], 00:08:35.267 "driver_specific": {} 00:08:35.267 } 00:08:35.267 ] 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.267 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.268 "name": "Existed_Raid", 00:08:35.268 "uuid": "ccf0dc0d-d8dc-4686-a903-22aae9e62915", 00:08:35.268 "strip_size_kb": 64, 00:08:35.268 "state": "configuring", 00:08:35.268 "raid_level": "concat", 00:08:35.268 "superblock": true, 00:08:35.268 "num_base_bdevs": 3, 00:08:35.268 "num_base_bdevs_discovered": 1, 00:08:35.268 "num_base_bdevs_operational": 3, 00:08:35.268 "base_bdevs_list": [ 00:08:35.268 { 00:08:35.268 "name": "BaseBdev1", 00:08:35.268 "uuid": "2073ac0b-fee7-4be9-a461-ec94db307c80", 00:08:35.268 "is_configured": true, 00:08:35.268 "data_offset": 2048, 00:08:35.268 "data_size": 
63488 00:08:35.268 }, 00:08:35.268 { 00:08:35.268 "name": "BaseBdev2", 00:08:35.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.268 "is_configured": false, 00:08:35.268 "data_offset": 0, 00:08:35.268 "data_size": 0 00:08:35.268 }, 00:08:35.268 { 00:08:35.268 "name": "BaseBdev3", 00:08:35.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.268 "is_configured": false, 00:08:35.268 "data_offset": 0, 00:08:35.268 "data_size": 0 00:08:35.268 } 00:08:35.268 ] 00:08:35.268 }' 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.268 04:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.837 [2024-11-21 04:54:52.274745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.837 [2024-11-21 04:54:52.274867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.837 [2024-11-21 04:54:52.282746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.837 [2024-11-21 
04:54:52.284999] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.837 [2024-11-21 04:54:52.285043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.837 [2024-11-21 04:54:52.285053] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.837 [2024-11-21 04:54:52.285079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.837 "name": "Existed_Raid", 00:08:35.837 "uuid": "141dae74-e363-412d-b9f4-65bcf10eab61", 00:08:35.837 "strip_size_kb": 64, 00:08:35.837 "state": "configuring", 00:08:35.837 "raid_level": "concat", 00:08:35.837 "superblock": true, 00:08:35.837 "num_base_bdevs": 3, 00:08:35.837 "num_base_bdevs_discovered": 1, 00:08:35.837 "num_base_bdevs_operational": 3, 00:08:35.837 "base_bdevs_list": [ 00:08:35.837 { 00:08:35.837 "name": "BaseBdev1", 00:08:35.837 "uuid": "2073ac0b-fee7-4be9-a461-ec94db307c80", 00:08:35.837 "is_configured": true, 00:08:35.837 "data_offset": 2048, 00:08:35.837 "data_size": 63488 00:08:35.837 }, 00:08:35.837 { 00:08:35.837 "name": "BaseBdev2", 00:08:35.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.837 "is_configured": false, 00:08:35.837 "data_offset": 0, 00:08:35.837 "data_size": 0 00:08:35.837 }, 00:08:35.837 { 00:08:35.837 "name": "BaseBdev3", 00:08:35.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.837 "is_configured": false, 00:08:35.837 "data_offset": 0, 00:08:35.837 "data_size": 0 00:08:35.837 } 00:08:35.837 ] 00:08:35.837 }' 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.837 04:54:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.097 BaseBdev2 00:08:36.097 [2024-11-21 04:54:52.778648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.097 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.097 [ 00:08:36.097 { 00:08:36.097 "name": "BaseBdev2", 00:08:36.097 "aliases": [ 00:08:36.097 "c8ecb352-7753-4c86-ae99-c9fb91a6b49a" 00:08:36.097 ], 00:08:36.097 "product_name": "Malloc disk", 00:08:36.097 "block_size": 512, 00:08:36.097 "num_blocks": 65536, 00:08:36.097 "uuid": "c8ecb352-7753-4c86-ae99-c9fb91a6b49a", 00:08:36.097 "assigned_rate_limits": { 00:08:36.097 "rw_ios_per_sec": 0, 00:08:36.097 "rw_mbytes_per_sec": 0, 00:08:36.097 "r_mbytes_per_sec": 0, 00:08:36.097 "w_mbytes_per_sec": 0 00:08:36.097 }, 00:08:36.097 "claimed": true, 00:08:36.097 "claim_type": "exclusive_write", 00:08:36.097 "zoned": false, 00:08:36.097 "supported_io_types": { 00:08:36.097 "read": true, 00:08:36.097 "write": true, 00:08:36.097 "unmap": true, 00:08:36.097 "flush": true, 00:08:36.097 "reset": true, 00:08:36.097 "nvme_admin": false, 00:08:36.097 "nvme_io": false, 00:08:36.097 "nvme_io_md": false, 00:08:36.097 "write_zeroes": true, 00:08:36.097 "zcopy": true, 00:08:36.097 "get_zone_info": false, 00:08:36.097 "zone_management": false, 00:08:36.097 "zone_append": false, 00:08:36.097 "compare": false, 00:08:36.097 "compare_and_write": false, 00:08:36.097 "abort": true, 00:08:36.097 "seek_hole": false, 00:08:36.097 "seek_data": false, 00:08:36.097 "copy": true, 00:08:36.097 "nvme_iov_md": false 00:08:36.098 }, 00:08:36.098 "memory_domains": [ 00:08:36.098 { 00:08:36.098 "dma_device_id": "system", 00:08:36.098 "dma_device_type": 1 00:08:36.098 }, 00:08:36.098 { 00:08:36.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.098 "dma_device_type": 2 00:08:36.098 } 00:08:36.098 ], 00:08:36.098 "driver_specific": {} 00:08:36.098 } 00:08:36.098 ] 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.098 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.357 04:54:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.357 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.357 "name": "Existed_Raid", 00:08:36.357 "uuid": "141dae74-e363-412d-b9f4-65bcf10eab61", 00:08:36.357 "strip_size_kb": 64, 00:08:36.357 "state": "configuring", 00:08:36.357 "raid_level": "concat", 00:08:36.357 "superblock": true, 00:08:36.357 "num_base_bdevs": 3, 00:08:36.357 "num_base_bdevs_discovered": 2, 00:08:36.357 "num_base_bdevs_operational": 3, 00:08:36.357 "base_bdevs_list": [ 00:08:36.357 { 00:08:36.357 "name": "BaseBdev1", 00:08:36.357 "uuid": "2073ac0b-fee7-4be9-a461-ec94db307c80", 00:08:36.357 "is_configured": true, 00:08:36.357 "data_offset": 2048, 00:08:36.357 "data_size": 63488 00:08:36.357 }, 00:08:36.357 { 00:08:36.357 "name": "BaseBdev2", 00:08:36.357 "uuid": "c8ecb352-7753-4c86-ae99-c9fb91a6b49a", 00:08:36.357 "is_configured": true, 00:08:36.357 "data_offset": 2048, 00:08:36.357 "data_size": 63488 00:08:36.357 }, 00:08:36.357 { 00:08:36.357 "name": "BaseBdev3", 00:08:36.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.357 "is_configured": false, 00:08:36.357 "data_offset": 0, 00:08:36.357 "data_size": 0 00:08:36.357 } 00:08:36.357 ] 00:08:36.357 }' 00:08:36.357 04:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.357 04:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.618 [2024-11-21 04:54:53.312446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.618 [2024-11-21 04:54:53.312701] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:36.618 [2024-11-21 04:54:53.312731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:36.618 [2024-11-21 04:54:53.313151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:36.618 BaseBdev3 00:08:36.618 [2024-11-21 04:54:53.313345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:36.618 [2024-11-21 04:54:53.313420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:36.618 [2024-11-21 04:54:53.313588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.618 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.618 [ 00:08:36.618 { 00:08:36.618 "name": "BaseBdev3", 00:08:36.618 "aliases": [ 00:08:36.618 "314e63b5-18c2-4248-83bd-f72613cae830" 00:08:36.618 ], 00:08:36.618 "product_name": "Malloc disk", 00:08:36.618 "block_size": 512, 00:08:36.618 "num_blocks": 65536, 00:08:36.618 "uuid": "314e63b5-18c2-4248-83bd-f72613cae830", 00:08:36.618 "assigned_rate_limits": { 00:08:36.618 "rw_ios_per_sec": 0, 00:08:36.618 "rw_mbytes_per_sec": 0, 00:08:36.618 "r_mbytes_per_sec": 0, 00:08:36.618 "w_mbytes_per_sec": 0 00:08:36.618 }, 00:08:36.618 "claimed": true, 00:08:36.618 "claim_type": "exclusive_write", 00:08:36.618 "zoned": false, 00:08:36.618 "supported_io_types": { 00:08:36.618 "read": true, 00:08:36.618 "write": true, 00:08:36.618 "unmap": true, 00:08:36.618 "flush": true, 00:08:36.618 "reset": true, 00:08:36.618 "nvme_admin": false, 00:08:36.618 "nvme_io": false, 00:08:36.618 "nvme_io_md": false, 00:08:36.618 "write_zeroes": true, 00:08:36.618 "zcopy": true, 00:08:36.618 "get_zone_info": false, 00:08:36.618 "zone_management": false, 00:08:36.618 "zone_append": false, 00:08:36.618 "compare": false, 00:08:36.618 "compare_and_write": false, 00:08:36.618 "abort": true, 00:08:36.618 "seek_hole": false, 00:08:36.618 "seek_data": false, 00:08:36.618 "copy": true, 00:08:36.618 "nvme_iov_md": false 00:08:36.618 }, 00:08:36.618 "memory_domains": [ 00:08:36.618 { 00:08:36.618 "dma_device_id": "system", 00:08:36.618 "dma_device_type": 1 00:08:36.618 }, 00:08:36.618 { 00:08:36.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.618 "dma_device_type": 2 00:08:36.618 } 00:08:36.618 ], 00:08:36.618 "driver_specific": 
{} 00:08:36.618 } 00:08:36.618 ] 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.878 "name": "Existed_Raid", 00:08:36.878 "uuid": "141dae74-e363-412d-b9f4-65bcf10eab61", 00:08:36.878 "strip_size_kb": 64, 00:08:36.878 "state": "online", 00:08:36.878 "raid_level": "concat", 00:08:36.878 "superblock": true, 00:08:36.878 "num_base_bdevs": 3, 00:08:36.878 "num_base_bdevs_discovered": 3, 00:08:36.878 "num_base_bdevs_operational": 3, 00:08:36.878 "base_bdevs_list": [ 00:08:36.878 { 00:08:36.878 "name": "BaseBdev1", 00:08:36.878 "uuid": "2073ac0b-fee7-4be9-a461-ec94db307c80", 00:08:36.878 "is_configured": true, 00:08:36.878 "data_offset": 2048, 00:08:36.878 "data_size": 63488 00:08:36.878 }, 00:08:36.878 { 00:08:36.878 "name": "BaseBdev2", 00:08:36.878 "uuid": "c8ecb352-7753-4c86-ae99-c9fb91a6b49a", 00:08:36.878 "is_configured": true, 00:08:36.878 "data_offset": 2048, 00:08:36.878 "data_size": 63488 00:08:36.878 }, 00:08:36.878 { 00:08:36.878 "name": "BaseBdev3", 00:08:36.878 "uuid": "314e63b5-18c2-4248-83bd-f72613cae830", 00:08:36.878 "is_configured": true, 00:08:36.878 "data_offset": 2048, 00:08:36.878 "data_size": 63488 00:08:36.878 } 00:08:36.878 ] 00:08:36.878 }' 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.878 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.138 [2024-11-21 04:54:53.831970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.138 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.397 "name": "Existed_Raid", 00:08:37.397 "aliases": [ 00:08:37.397 "141dae74-e363-412d-b9f4-65bcf10eab61" 00:08:37.397 ], 00:08:37.397 "product_name": "Raid Volume", 00:08:37.397 "block_size": 512, 00:08:37.397 "num_blocks": 190464, 00:08:37.397 "uuid": "141dae74-e363-412d-b9f4-65bcf10eab61", 00:08:37.397 "assigned_rate_limits": { 00:08:37.397 "rw_ios_per_sec": 0, 00:08:37.397 "rw_mbytes_per_sec": 0, 00:08:37.397 "r_mbytes_per_sec": 0, 00:08:37.397 "w_mbytes_per_sec": 0 00:08:37.397 }, 00:08:37.397 "claimed": false, 00:08:37.397 "zoned": false, 00:08:37.397 "supported_io_types": { 00:08:37.397 "read": true, 00:08:37.397 "write": true, 00:08:37.397 "unmap": true, 00:08:37.397 "flush": true, 00:08:37.397 "reset": true, 00:08:37.397 "nvme_admin": false, 00:08:37.397 "nvme_io": false, 00:08:37.397 "nvme_io_md": false, 00:08:37.397 
"write_zeroes": true, 00:08:37.397 "zcopy": false, 00:08:37.397 "get_zone_info": false, 00:08:37.397 "zone_management": false, 00:08:37.397 "zone_append": false, 00:08:37.397 "compare": false, 00:08:37.397 "compare_and_write": false, 00:08:37.397 "abort": false, 00:08:37.397 "seek_hole": false, 00:08:37.397 "seek_data": false, 00:08:37.397 "copy": false, 00:08:37.397 "nvme_iov_md": false 00:08:37.397 }, 00:08:37.397 "memory_domains": [ 00:08:37.397 { 00:08:37.397 "dma_device_id": "system", 00:08:37.397 "dma_device_type": 1 00:08:37.397 }, 00:08:37.397 { 00:08:37.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.397 "dma_device_type": 2 00:08:37.397 }, 00:08:37.397 { 00:08:37.397 "dma_device_id": "system", 00:08:37.397 "dma_device_type": 1 00:08:37.397 }, 00:08:37.397 { 00:08:37.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.397 "dma_device_type": 2 00:08:37.397 }, 00:08:37.397 { 00:08:37.397 "dma_device_id": "system", 00:08:37.397 "dma_device_type": 1 00:08:37.397 }, 00:08:37.397 { 00:08:37.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.397 "dma_device_type": 2 00:08:37.397 } 00:08:37.397 ], 00:08:37.397 "driver_specific": { 00:08:37.397 "raid": { 00:08:37.397 "uuid": "141dae74-e363-412d-b9f4-65bcf10eab61", 00:08:37.397 "strip_size_kb": 64, 00:08:37.397 "state": "online", 00:08:37.397 "raid_level": "concat", 00:08:37.397 "superblock": true, 00:08:37.397 "num_base_bdevs": 3, 00:08:37.397 "num_base_bdevs_discovered": 3, 00:08:37.397 "num_base_bdevs_operational": 3, 00:08:37.397 "base_bdevs_list": [ 00:08:37.397 { 00:08:37.397 "name": "BaseBdev1", 00:08:37.397 "uuid": "2073ac0b-fee7-4be9-a461-ec94db307c80", 00:08:37.397 "is_configured": true, 00:08:37.397 "data_offset": 2048, 00:08:37.397 "data_size": 63488 00:08:37.397 }, 00:08:37.397 { 00:08:37.397 "name": "BaseBdev2", 00:08:37.397 "uuid": "c8ecb352-7753-4c86-ae99-c9fb91a6b49a", 00:08:37.397 "is_configured": true, 00:08:37.397 "data_offset": 2048, 00:08:37.397 "data_size": 63488 00:08:37.397 }, 
00:08:37.397 { 00:08:37.397 "name": "BaseBdev3", 00:08:37.397 "uuid": "314e63b5-18c2-4248-83bd-f72613cae830", 00:08:37.397 "is_configured": true, 00:08:37.397 "data_offset": 2048, 00:08:37.397 "data_size": 63488 00:08:37.397 } 00:08:37.397 ] 00:08:37.397 } 00:08:37.397 } 00:08:37.397 }' 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:37.397 BaseBdev2 00:08:37.397 BaseBdev3' 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.397 04:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.397 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.397 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.397 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.397 
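In the trace above, the test extracts the configured base bdev names from the `bdev_get_bdevs` JSON with the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`. A minimal standalone sketch of that same selection, re-expressed with Python's stdlib `json` module against a trimmed copy of the record shown in the log (fields abbreviated; this is an illustration, not part of the test scripts):

```python
import json

# Trimmed-down copy of the Raid Volume record from the trace; only the fields
# the jq filter touches are kept. The third entry is hypothetical, added to
# show that unconfigured slots are filtered out.
raid_info = json.loads("""{
  "name": "Existed_Raid",
  "driver_specific": {"raid": {"base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": null, "is_configured": false}
  ]}}
}""")

# Equivalent of: .driver_specific.raid.base_bdevs_list[]
#                | select(.is_configured == true).name
configured = [b["name"]
              for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
              if b["is_configured"]]
print(configured)  # -> ['BaseBdev1', 'BaseBdev2']
```

The shell test then iterates over these names and compares each base bdev's `[.block_size, .md_size, .md_interleave, .dif_type]` tuple against the raid bdev's own, which is what the repeated `cmp_base_bdev='512    '` lines in the trace are doing.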
04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:37.397 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.397 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.397 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.397 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.397 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.398 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.398 [2024-11-21 04:54:54.123216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.398 [2024-11-21 04:54:54.123293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.398 [2024-11-21 04:54:54.123405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.657 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.657 "name": "Existed_Raid", 00:08:37.657 "uuid": "141dae74-e363-412d-b9f4-65bcf10eab61", 00:08:37.657 "strip_size_kb": 64, 00:08:37.657 "state": "offline", 00:08:37.657 "raid_level": "concat", 00:08:37.657 "superblock": true, 00:08:37.657 "num_base_bdevs": 3, 00:08:37.657 "num_base_bdevs_discovered": 2, 00:08:37.657 "num_base_bdevs_operational": 2, 00:08:37.657 "base_bdevs_list": [ 00:08:37.657 { 00:08:37.657 "name": null, 00:08:37.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.657 "is_configured": false, 00:08:37.658 "data_offset": 0, 00:08:37.658 "data_size": 63488 00:08:37.658 }, 00:08:37.658 { 00:08:37.658 "name": "BaseBdev2", 00:08:37.658 "uuid": "c8ecb352-7753-4c86-ae99-c9fb91a6b49a", 00:08:37.658 "is_configured": true, 00:08:37.658 "data_offset": 2048, 00:08:37.658 "data_size": 63488 00:08:37.658 }, 00:08:37.658 { 00:08:37.658 "name": "BaseBdev3", 00:08:37.658 "uuid": "314e63b5-18c2-4248-83bd-f72613cae830", 
00:08:37.658 "is_configured": true, 00:08:37.658 "data_offset": 2048, 00:08:37.658 "data_size": 63488 00:08:37.658 } 00:08:37.658 ] 00:08:37.658 }' 00:08:37.658 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.658 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.917 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 [2024-11-21 04:54:54.642860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb 
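The `verify_raid_bdev_state Existed_Raid offline concat 64 2` call seen here fetches the record via `bdev_raid_get_bdevs all`, selects it by name with jq, and compares its fields against the expected values passed as arguments. A hedged Python sketch of that comparison (the real helper is shell; the function name and field checks below mirror the locals visible in the trace — `expected_state`, `raid_level`, `strip_size`, `num_base_bdevs_operational` — but are an assumed rendition, not the helper's actual code):

```python
import json

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Compare the fetched raid bdev record against the expected values,
    # matching the argument order of the shell helper in the trace.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return True

# Values taken from the offline-state JSON printed just above in the log.
info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "offline",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}""")
print(verify_raid_bdev_state(info, "offline", "concat", 64, 2))  # -> True
```

This matches the test's expectation that deleting BaseBdev1 from a non-redundant concat array (note `has_redundancy concat` returning 1 above) drives the raid bdev from `online` to `offline` with two of three base bdevs remaining operational.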
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.176 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.176 [2024-11-21 04:54:54.698231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:38.176 [2024-11-21 04:54:54.698283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 BaseBdev2 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 [ 00:08:38.177 { 00:08:38.177 "name": "BaseBdev2", 00:08:38.177 "aliases": [ 00:08:38.177 "7971aeb7-121e-407f-80d6-e2c2d2a92e4b" 00:08:38.177 ], 00:08:38.177 "product_name": "Malloc disk", 00:08:38.177 "block_size": 512, 00:08:38.177 "num_blocks": 65536, 00:08:38.177 "uuid": "7971aeb7-121e-407f-80d6-e2c2d2a92e4b", 00:08:38.177 "assigned_rate_limits": { 00:08:38.177 "rw_ios_per_sec": 0, 00:08:38.177 "rw_mbytes_per_sec": 0, 00:08:38.177 "r_mbytes_per_sec": 0, 00:08:38.177 "w_mbytes_per_sec": 0 00:08:38.177 }, 00:08:38.177 "claimed": false, 00:08:38.177 "zoned": false, 00:08:38.177 "supported_io_types": { 00:08:38.177 "read": true, 00:08:38.177 "write": true, 00:08:38.177 "unmap": true, 00:08:38.177 "flush": true, 00:08:38.177 "reset": true, 00:08:38.177 "nvme_admin": false, 00:08:38.177 "nvme_io": false, 00:08:38.177 "nvme_io_md": false, 00:08:38.177 "write_zeroes": true, 00:08:38.177 "zcopy": true, 00:08:38.177 "get_zone_info": false, 00:08:38.177 "zone_management": false, 00:08:38.177 
"zone_append": false, 00:08:38.177 "compare": false, 00:08:38.177 "compare_and_write": false, 00:08:38.177 "abort": true, 00:08:38.177 "seek_hole": false, 00:08:38.177 "seek_data": false, 00:08:38.177 "copy": true, 00:08:38.177 "nvme_iov_md": false 00:08:38.177 }, 00:08:38.177 "memory_domains": [ 00:08:38.177 { 00:08:38.177 "dma_device_id": "system", 00:08:38.177 "dma_device_type": 1 00:08:38.177 }, 00:08:38.177 { 00:08:38.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.177 "dma_device_type": 2 00:08:38.177 } 00:08:38.177 ], 00:08:38.177 "driver_specific": {} 00:08:38.177 } 00:08:38.177 ] 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 BaseBdev3 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:38.177 
04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.177 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 [ 00:08:38.177 { 00:08:38.177 "name": "BaseBdev3", 00:08:38.177 "aliases": [ 00:08:38.177 "c28f45b5-0b4d-439a-8a35-fb20e29cf36a" 00:08:38.177 ], 00:08:38.177 "product_name": "Malloc disk", 00:08:38.177 "block_size": 512, 00:08:38.177 "num_blocks": 65536, 00:08:38.177 "uuid": "c28f45b5-0b4d-439a-8a35-fb20e29cf36a", 00:08:38.177 "assigned_rate_limits": { 00:08:38.177 "rw_ios_per_sec": 0, 00:08:38.177 "rw_mbytes_per_sec": 0, 00:08:38.177 "r_mbytes_per_sec": 0, 00:08:38.177 "w_mbytes_per_sec": 0 00:08:38.177 }, 00:08:38.177 "claimed": false, 00:08:38.177 "zoned": false, 00:08:38.177 "supported_io_types": { 00:08:38.177 "read": true, 00:08:38.177 "write": true, 00:08:38.177 "unmap": true, 00:08:38.177 "flush": true, 00:08:38.177 "reset": true, 00:08:38.177 "nvme_admin": false, 00:08:38.177 "nvme_io": false, 00:08:38.177 "nvme_io_md": false, 00:08:38.177 "write_zeroes": true, 00:08:38.177 "zcopy": true, 00:08:38.177 "get_zone_info": false, 
00:08:38.177 "zone_management": false, 00:08:38.177 "zone_append": false, 00:08:38.177 "compare": false, 00:08:38.177 "compare_and_write": false, 00:08:38.177 "abort": true, 00:08:38.177 "seek_hole": false, 00:08:38.177 "seek_data": false, 00:08:38.177 "copy": true, 00:08:38.177 "nvme_iov_md": false 00:08:38.177 }, 00:08:38.177 "memory_domains": [ 00:08:38.178 { 00:08:38.178 "dma_device_id": "system", 00:08:38.178 "dma_device_type": 1 00:08:38.178 }, 00:08:38.178 { 00:08:38.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.178 "dma_device_type": 2 00:08:38.178 } 00:08:38.178 ], 00:08:38.178 "driver_specific": {} 00:08:38.178 } 00:08:38.178 ] 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.178 [2024-11-21 04:54:54.882511] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.178 [2024-11-21 04:54:54.882615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.178 [2024-11-21 04:54:54.882663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.178 [2024-11-21 04:54:54.884931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.178 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.437 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.437 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:38.437 "name": "Existed_Raid", 00:08:38.437 "uuid": "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04", 00:08:38.437 "strip_size_kb": 64, 00:08:38.437 "state": "configuring", 00:08:38.437 "raid_level": "concat", 00:08:38.437 "superblock": true, 00:08:38.437 "num_base_bdevs": 3, 00:08:38.437 "num_base_bdevs_discovered": 2, 00:08:38.437 "num_base_bdevs_operational": 3, 00:08:38.437 "base_bdevs_list": [ 00:08:38.437 { 00:08:38.437 "name": "BaseBdev1", 00:08:38.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.437 "is_configured": false, 00:08:38.437 "data_offset": 0, 00:08:38.437 "data_size": 0 00:08:38.437 }, 00:08:38.437 { 00:08:38.437 "name": "BaseBdev2", 00:08:38.437 "uuid": "7971aeb7-121e-407f-80d6-e2c2d2a92e4b", 00:08:38.437 "is_configured": true, 00:08:38.437 "data_offset": 2048, 00:08:38.437 "data_size": 63488 00:08:38.437 }, 00:08:38.437 { 00:08:38.437 "name": "BaseBdev3", 00:08:38.437 "uuid": "c28f45b5-0b4d-439a-8a35-fb20e29cf36a", 00:08:38.437 "is_configured": true, 00:08:38.437 "data_offset": 2048, 00:08:38.437 "data_size": 63488 00:08:38.437 } 00:08:38.437 ] 00:08:38.437 }' 00:08:38.437 04:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.437 04:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.697 [2024-11-21 04:54:55.337722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.697 "name": "Existed_Raid", 00:08:38.697 "uuid": "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04", 00:08:38.697 "strip_size_kb": 64, 00:08:38.697 "state": "configuring", 00:08:38.697 "raid_level": "concat", 
00:08:38.697 "superblock": true, 00:08:38.697 "num_base_bdevs": 3, 00:08:38.697 "num_base_bdevs_discovered": 1, 00:08:38.697 "num_base_bdevs_operational": 3, 00:08:38.697 "base_bdevs_list": [ 00:08:38.697 { 00:08:38.697 "name": "BaseBdev1", 00:08:38.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.697 "is_configured": false, 00:08:38.697 "data_offset": 0, 00:08:38.697 "data_size": 0 00:08:38.697 }, 00:08:38.697 { 00:08:38.697 "name": null, 00:08:38.697 "uuid": "7971aeb7-121e-407f-80d6-e2c2d2a92e4b", 00:08:38.697 "is_configured": false, 00:08:38.697 "data_offset": 0, 00:08:38.697 "data_size": 63488 00:08:38.697 }, 00:08:38.697 { 00:08:38.697 "name": "BaseBdev3", 00:08:38.697 "uuid": "c28f45b5-0b4d-439a-8a35-fb20e29cf36a", 00:08:38.697 "is_configured": true, 00:08:38.697 "data_offset": 2048, 00:08:38.697 "data_size": 63488 00:08:38.697 } 00:08:38.697 ] 00:08:38.697 }' 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.697 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.266 [2024-11-21 04:54:55.843677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.266 BaseBdev1 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.266 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.266 [ 00:08:39.266 { 00:08:39.266 "name": "BaseBdev1", 00:08:39.266 
"aliases": [ 00:08:39.266 "ac621952-11e5-4170-ad7e-bd0705b9367a" 00:08:39.266 ], 00:08:39.266 "product_name": "Malloc disk", 00:08:39.266 "block_size": 512, 00:08:39.266 "num_blocks": 65536, 00:08:39.266 "uuid": "ac621952-11e5-4170-ad7e-bd0705b9367a", 00:08:39.266 "assigned_rate_limits": { 00:08:39.266 "rw_ios_per_sec": 0, 00:08:39.267 "rw_mbytes_per_sec": 0, 00:08:39.267 "r_mbytes_per_sec": 0, 00:08:39.267 "w_mbytes_per_sec": 0 00:08:39.267 }, 00:08:39.267 "claimed": true, 00:08:39.267 "claim_type": "exclusive_write", 00:08:39.267 "zoned": false, 00:08:39.267 "supported_io_types": { 00:08:39.267 "read": true, 00:08:39.267 "write": true, 00:08:39.267 "unmap": true, 00:08:39.267 "flush": true, 00:08:39.267 "reset": true, 00:08:39.267 "nvme_admin": false, 00:08:39.267 "nvme_io": false, 00:08:39.267 "nvme_io_md": false, 00:08:39.267 "write_zeroes": true, 00:08:39.267 "zcopy": true, 00:08:39.267 "get_zone_info": false, 00:08:39.267 "zone_management": false, 00:08:39.267 "zone_append": false, 00:08:39.267 "compare": false, 00:08:39.267 "compare_and_write": false, 00:08:39.267 "abort": true, 00:08:39.267 "seek_hole": false, 00:08:39.267 "seek_data": false, 00:08:39.267 "copy": true, 00:08:39.267 "nvme_iov_md": false 00:08:39.267 }, 00:08:39.267 "memory_domains": [ 00:08:39.267 { 00:08:39.267 "dma_device_id": "system", 00:08:39.267 "dma_device_type": 1 00:08:39.267 }, 00:08:39.267 { 00:08:39.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.267 "dma_device_type": 2 00:08:39.267 } 00:08:39.267 ], 00:08:39.267 "driver_specific": {} 00:08:39.267 } 00:08:39.267 ] 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.267 04:54:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.267 "name": "Existed_Raid", 00:08:39.267 "uuid": "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04", 00:08:39.267 "strip_size_kb": 64, 00:08:39.267 "state": "configuring", 00:08:39.267 "raid_level": "concat", 00:08:39.267 "superblock": true, 00:08:39.267 "num_base_bdevs": 3, 00:08:39.267 
"num_base_bdevs_discovered": 2, 00:08:39.267 "num_base_bdevs_operational": 3, 00:08:39.267 "base_bdevs_list": [ 00:08:39.267 { 00:08:39.267 "name": "BaseBdev1", 00:08:39.267 "uuid": "ac621952-11e5-4170-ad7e-bd0705b9367a", 00:08:39.267 "is_configured": true, 00:08:39.267 "data_offset": 2048, 00:08:39.267 "data_size": 63488 00:08:39.267 }, 00:08:39.267 { 00:08:39.267 "name": null, 00:08:39.267 "uuid": "7971aeb7-121e-407f-80d6-e2c2d2a92e4b", 00:08:39.267 "is_configured": false, 00:08:39.267 "data_offset": 0, 00:08:39.267 "data_size": 63488 00:08:39.267 }, 00:08:39.267 { 00:08:39.267 "name": "BaseBdev3", 00:08:39.267 "uuid": "c28f45b5-0b4d-439a-8a35-fb20e29cf36a", 00:08:39.267 "is_configured": true, 00:08:39.267 "data_offset": 2048, 00:08:39.267 "data_size": 63488 00:08:39.267 } 00:08:39.267 ] 00:08:39.267 }' 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.267 04:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.837 04:54:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.837 [2024-11-21 04:54:56.358969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.837 04:54:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.837 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.837 "name": "Existed_Raid", 00:08:39.837 "uuid": "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04", 00:08:39.837 "strip_size_kb": 64, 00:08:39.837 "state": "configuring", 00:08:39.838 "raid_level": "concat", 00:08:39.838 "superblock": true, 00:08:39.838 "num_base_bdevs": 3, 00:08:39.838 "num_base_bdevs_discovered": 1, 00:08:39.838 "num_base_bdevs_operational": 3, 00:08:39.838 "base_bdevs_list": [ 00:08:39.838 { 00:08:39.838 "name": "BaseBdev1", 00:08:39.838 "uuid": "ac621952-11e5-4170-ad7e-bd0705b9367a", 00:08:39.838 "is_configured": true, 00:08:39.838 "data_offset": 2048, 00:08:39.838 "data_size": 63488 00:08:39.838 }, 00:08:39.838 { 00:08:39.838 "name": null, 00:08:39.838 "uuid": "7971aeb7-121e-407f-80d6-e2c2d2a92e4b", 00:08:39.838 "is_configured": false, 00:08:39.838 "data_offset": 0, 00:08:39.838 "data_size": 63488 00:08:39.838 }, 00:08:39.838 { 00:08:39.838 "name": null, 00:08:39.838 "uuid": "c28f45b5-0b4d-439a-8a35-fb20e29cf36a", 00:08:39.838 "is_configured": false, 00:08:39.838 "data_offset": 0, 00:08:39.838 "data_size": 63488 00:08:39.838 } 00:08:39.838 ] 00:08:39.838 }' 00:08:39.838 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.838 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.098 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:40.098 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.358 04:54:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.358 [2024-11-21 04:54:56.874059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.358 "name": "Existed_Raid", 00:08:40.358 "uuid": "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04", 00:08:40.358 "strip_size_kb": 64, 00:08:40.358 "state": "configuring", 00:08:40.358 "raid_level": "concat", 00:08:40.358 "superblock": true, 00:08:40.358 "num_base_bdevs": 3, 00:08:40.358 "num_base_bdevs_discovered": 2, 00:08:40.358 "num_base_bdevs_operational": 3, 00:08:40.358 "base_bdevs_list": [ 00:08:40.358 { 00:08:40.358 "name": "BaseBdev1", 00:08:40.358 "uuid": "ac621952-11e5-4170-ad7e-bd0705b9367a", 00:08:40.358 "is_configured": true, 00:08:40.358 "data_offset": 2048, 00:08:40.358 "data_size": 63488 00:08:40.358 }, 00:08:40.358 { 00:08:40.358 "name": null, 00:08:40.358 "uuid": "7971aeb7-121e-407f-80d6-e2c2d2a92e4b", 00:08:40.358 "is_configured": false, 00:08:40.358 "data_offset": 0, 00:08:40.358 "data_size": 63488 00:08:40.358 }, 00:08:40.358 { 00:08:40.358 "name": "BaseBdev3", 00:08:40.358 "uuid": "c28f45b5-0b4d-439a-8a35-fb20e29cf36a", 00:08:40.358 "is_configured": true, 00:08:40.358 "data_offset": 2048, 00:08:40.358 "data_size": 63488 00:08:40.358 } 00:08:40.358 ] 00:08:40.358 }' 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.358 04:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:40.619 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.619 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:40.619 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.619 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.619 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.879 [2024-11-21 04:54:57.373277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.879 "name": "Existed_Raid", 00:08:40.879 "uuid": "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04", 00:08:40.879 "strip_size_kb": 64, 00:08:40.879 "state": "configuring", 00:08:40.879 "raid_level": "concat", 00:08:40.879 "superblock": true, 00:08:40.879 "num_base_bdevs": 3, 00:08:40.879 "num_base_bdevs_discovered": 1, 00:08:40.879 "num_base_bdevs_operational": 3, 00:08:40.879 "base_bdevs_list": [ 00:08:40.879 { 00:08:40.879 "name": null, 00:08:40.879 "uuid": "ac621952-11e5-4170-ad7e-bd0705b9367a", 00:08:40.879 "is_configured": false, 00:08:40.879 "data_offset": 0, 00:08:40.879 "data_size": 63488 00:08:40.879 }, 00:08:40.879 { 00:08:40.879 "name": null, 00:08:40.879 "uuid": "7971aeb7-121e-407f-80d6-e2c2d2a92e4b", 00:08:40.879 "is_configured": false, 00:08:40.879 "data_offset": 0, 00:08:40.879 "data_size": 63488 00:08:40.879 
}, 00:08:40.879 { 00:08:40.879 "name": "BaseBdev3", 00:08:40.879 "uuid": "c28f45b5-0b4d-439a-8a35-fb20e29cf36a", 00:08:40.879 "is_configured": true, 00:08:40.879 "data_offset": 2048, 00:08:40.879 "data_size": 63488 00:08:40.879 } 00:08:40.879 ] 00:08:40.879 }' 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.879 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.139 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.139 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.139 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.139 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:41.139 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.399 [2024-11-21 04:54:57.899228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.399 "name": "Existed_Raid", 00:08:41.399 "uuid": "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04", 00:08:41.399 "strip_size_kb": 64, 00:08:41.399 "state": "configuring", 00:08:41.399 "raid_level": "concat", 00:08:41.399 "superblock": true, 00:08:41.399 "num_base_bdevs": 3, 00:08:41.399 "num_base_bdevs_discovered": 2, 
00:08:41.399 "num_base_bdevs_operational": 3, 00:08:41.399 "base_bdevs_list": [ 00:08:41.399 { 00:08:41.399 "name": null, 00:08:41.399 "uuid": "ac621952-11e5-4170-ad7e-bd0705b9367a", 00:08:41.399 "is_configured": false, 00:08:41.399 "data_offset": 0, 00:08:41.399 "data_size": 63488 00:08:41.399 }, 00:08:41.399 { 00:08:41.399 "name": "BaseBdev2", 00:08:41.399 "uuid": "7971aeb7-121e-407f-80d6-e2c2d2a92e4b", 00:08:41.399 "is_configured": true, 00:08:41.399 "data_offset": 2048, 00:08:41.399 "data_size": 63488 00:08:41.399 }, 00:08:41.399 { 00:08:41.399 "name": "BaseBdev3", 00:08:41.399 "uuid": "c28f45b5-0b4d-439a-8a35-fb20e29cf36a", 00:08:41.399 "is_configured": true, 00:08:41.399 "data_offset": 2048, 00:08:41.399 "data_size": 63488 00:08:41.399 } 00:08:41.399 ] 00:08:41.399 }' 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.399 04:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.658 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.658 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.659 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:41.659 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ac621952-11e5-4170-ad7e-bd0705b9367a 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.919 [2024-11-21 04:54:58.465342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:41.919 NewBaseBdev 00:08:41.919 [2024-11-21 04:54:58.465609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:41.919 [2024-11-21 04:54:58.465632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:41.919 [2024-11-21 04:54:58.465895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:41.919 [2024-11-21 04:54:58.466033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:41.919 [2024-11-21 04:54:58.466042] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:41.919 [2024-11-21 04:54:58.466170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:41.919 04:54:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.919 [ 00:08:41.919 { 00:08:41.919 "name": "NewBaseBdev", 00:08:41.919 "aliases": [ 00:08:41.919 "ac621952-11e5-4170-ad7e-bd0705b9367a" 00:08:41.919 ], 00:08:41.919 "product_name": "Malloc disk", 00:08:41.919 "block_size": 512, 00:08:41.919 "num_blocks": 65536, 00:08:41.919 "uuid": "ac621952-11e5-4170-ad7e-bd0705b9367a", 00:08:41.919 "assigned_rate_limits": { 00:08:41.919 "rw_ios_per_sec": 0, 00:08:41.919 "rw_mbytes_per_sec": 0, 00:08:41.919 "r_mbytes_per_sec": 0, 00:08:41.919 "w_mbytes_per_sec": 0 00:08:41.919 }, 00:08:41.919 "claimed": true, 00:08:41.919 "claim_type": "exclusive_write", 00:08:41.919 "zoned": false, 00:08:41.919 "supported_io_types": { 00:08:41.919 "read": true, 00:08:41.919 "write": true, 00:08:41.919 "unmap": true, 
00:08:41.919 "flush": true, 00:08:41.919 "reset": true, 00:08:41.919 "nvme_admin": false, 00:08:41.919 "nvme_io": false, 00:08:41.919 "nvme_io_md": false, 00:08:41.919 "write_zeroes": true, 00:08:41.919 "zcopy": true, 00:08:41.919 "get_zone_info": false, 00:08:41.919 "zone_management": false, 00:08:41.919 "zone_append": false, 00:08:41.919 "compare": false, 00:08:41.919 "compare_and_write": false, 00:08:41.919 "abort": true, 00:08:41.919 "seek_hole": false, 00:08:41.919 "seek_data": false, 00:08:41.919 "copy": true, 00:08:41.919 "nvme_iov_md": false 00:08:41.919 }, 00:08:41.919 "memory_domains": [ 00:08:41.919 { 00:08:41.919 "dma_device_id": "system", 00:08:41.919 "dma_device_type": 1 00:08:41.919 }, 00:08:41.919 { 00:08:41.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.919 "dma_device_type": 2 00:08:41.919 } 00:08:41.919 ], 00:08:41.919 "driver_specific": {} 00:08:41.919 } 00:08:41.919 ] 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.919 04:54:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.919 "name": "Existed_Raid", 00:08:41.919 "uuid": "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04", 00:08:41.919 "strip_size_kb": 64, 00:08:41.919 "state": "online", 00:08:41.919 "raid_level": "concat", 00:08:41.919 "superblock": true, 00:08:41.919 "num_base_bdevs": 3, 00:08:41.919 "num_base_bdevs_discovered": 3, 00:08:41.919 "num_base_bdevs_operational": 3, 00:08:41.919 "base_bdevs_list": [ 00:08:41.919 { 00:08:41.919 "name": "NewBaseBdev", 00:08:41.919 "uuid": "ac621952-11e5-4170-ad7e-bd0705b9367a", 00:08:41.919 "is_configured": true, 00:08:41.919 "data_offset": 2048, 00:08:41.919 "data_size": 63488 00:08:41.919 }, 00:08:41.919 { 00:08:41.919 "name": "BaseBdev2", 00:08:41.919 "uuid": "7971aeb7-121e-407f-80d6-e2c2d2a92e4b", 00:08:41.919 "is_configured": true, 00:08:41.919 "data_offset": 2048, 00:08:41.919 "data_size": 63488 00:08:41.919 }, 00:08:41.919 { 00:08:41.919 "name": "BaseBdev3", 00:08:41.919 "uuid": "c28f45b5-0b4d-439a-8a35-fb20e29cf36a", 00:08:41.919 "is_configured": 
true, 00:08:41.919 "data_offset": 2048, 00:08:41.919 "data_size": 63488 00:08:41.919 } 00:08:41.919 ] 00:08:41.919 }' 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.919 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.512 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:42.512 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:42.512 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.512 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.512 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.512 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.512 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:42.512 04:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.512 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.512 04:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.512 [2024-11-21 04:54:59.004781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.512 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.512 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.512 "name": "Existed_Raid", 00:08:42.512 "aliases": [ 00:08:42.512 "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04" 00:08:42.512 ], 00:08:42.512 "product_name": "Raid Volume", 
00:08:42.512 "block_size": 512, 00:08:42.512 "num_blocks": 190464, 00:08:42.512 "uuid": "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04", 00:08:42.512 "assigned_rate_limits": { 00:08:42.512 "rw_ios_per_sec": 0, 00:08:42.512 "rw_mbytes_per_sec": 0, 00:08:42.512 "r_mbytes_per_sec": 0, 00:08:42.512 "w_mbytes_per_sec": 0 00:08:42.512 }, 00:08:42.512 "claimed": false, 00:08:42.512 "zoned": false, 00:08:42.512 "supported_io_types": { 00:08:42.512 "read": true, 00:08:42.512 "write": true, 00:08:42.512 "unmap": true, 00:08:42.512 "flush": true, 00:08:42.512 "reset": true, 00:08:42.512 "nvme_admin": false, 00:08:42.512 "nvme_io": false, 00:08:42.512 "nvme_io_md": false, 00:08:42.512 "write_zeroes": true, 00:08:42.512 "zcopy": false, 00:08:42.512 "get_zone_info": false, 00:08:42.512 "zone_management": false, 00:08:42.512 "zone_append": false, 00:08:42.512 "compare": false, 00:08:42.512 "compare_and_write": false, 00:08:42.512 "abort": false, 00:08:42.512 "seek_hole": false, 00:08:42.512 "seek_data": false, 00:08:42.512 "copy": false, 00:08:42.512 "nvme_iov_md": false 00:08:42.512 }, 00:08:42.512 "memory_domains": [ 00:08:42.512 { 00:08:42.512 "dma_device_id": "system", 00:08:42.512 "dma_device_type": 1 00:08:42.512 }, 00:08:42.512 { 00:08:42.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.512 "dma_device_type": 2 00:08:42.512 }, 00:08:42.512 { 00:08:42.512 "dma_device_id": "system", 00:08:42.512 "dma_device_type": 1 00:08:42.512 }, 00:08:42.512 { 00:08:42.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.512 "dma_device_type": 2 00:08:42.512 }, 00:08:42.512 { 00:08:42.512 "dma_device_id": "system", 00:08:42.512 "dma_device_type": 1 00:08:42.512 }, 00:08:42.512 { 00:08:42.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.512 "dma_device_type": 2 00:08:42.512 } 00:08:42.512 ], 00:08:42.512 "driver_specific": { 00:08:42.512 "raid": { 00:08:42.512 "uuid": "a6c492f8-93f5-4c92-9a45-ac12cdcb8c04", 00:08:42.512 "strip_size_kb": 64, 00:08:42.512 "state": "online", 00:08:42.512 
"raid_level": "concat", 00:08:42.512 "superblock": true, 00:08:42.512 "num_base_bdevs": 3, 00:08:42.512 "num_base_bdevs_discovered": 3, 00:08:42.512 "num_base_bdevs_operational": 3, 00:08:42.512 "base_bdevs_list": [ 00:08:42.512 { 00:08:42.512 "name": "NewBaseBdev", 00:08:42.512 "uuid": "ac621952-11e5-4170-ad7e-bd0705b9367a", 00:08:42.512 "is_configured": true, 00:08:42.512 "data_offset": 2048, 00:08:42.512 "data_size": 63488 00:08:42.512 }, 00:08:42.512 { 00:08:42.512 "name": "BaseBdev2", 00:08:42.512 "uuid": "7971aeb7-121e-407f-80d6-e2c2d2a92e4b", 00:08:42.512 "is_configured": true, 00:08:42.512 "data_offset": 2048, 00:08:42.512 "data_size": 63488 00:08:42.512 }, 00:08:42.512 { 00:08:42.512 "name": "BaseBdev3", 00:08:42.512 "uuid": "c28f45b5-0b4d-439a-8a35-fb20e29cf36a", 00:08:42.512 "is_configured": true, 00:08:42.512 "data_offset": 2048, 00:08:42.512 "data_size": 63488 00:08:42.512 } 00:08:42.512 ] 00:08:42.512 } 00:08:42.512 } 00:08:42.512 }' 00:08:42.512 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:42.513 BaseBdev2 00:08:42.513 BaseBdev3' 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 
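The `base_bdev_names` extraction at `bdev_raid.sh@188` above boils down to a single jq filter over the raid bdev's JSON. A minimal stand-alone sketch — the JSON literal is a trimmed stand-in for the real `bdev_get_bdevs` output, not captured from this run:

```shell
# Trimmed stand-in for the bdev_get_bdevs output of a 3-disk concat raid.
raid_bdev_info='{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        { "name": "NewBaseBdev", "is_configured": true },
        { "name": "BaseBdev2",   "is_configured": true },
        { "name": "BaseBdev3",   "is_configured": false }
      ]
    }
  }
}'

# Same filter as bdev_raid.sh@188: keep only configured base bdevs, one name per line.
base_bdev_names=$(jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' \
  <<< "$raid_bdev_info")
echo "$base_bdev_names"
```

In the trace all three base bdevs are configured, so the helper gets all three names back; the unconfigured third entry here just shows what `select()` drops.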
00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- 
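The repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above compare a joined block_size/md_size/md_interleave/dif_type string; the three trailing spaces come from jq's `join()` rendering the three null metadata fields as empty strings. A sketch against a hand-written stand-in for one bdev record (the real filter runs `.[] | …` over the `bdev_get_bdevs` array):

```shell
# Stand-in for one entry of `bdev_get_bdevs -b pt1`; the md_* and dif fields
# are null on a plain 512-byte bdev, as in the trace above.
bdev_info='{ "block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null }'

# Same projection as bdev_raid.sh@192: join() renders each null as an empty
# string, so the result is "512" followed by three spaces.
cmp_base_bdev=$(jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' \
  <<< "$bdev_info")
printf '%s|\n' "$cmp_base_bdev"
```

That is why the bash pattern in the `[[ … ]]` test escapes three literal trailing spaces: raid bdev and base bdev match only if every one of the four fields agrees, nulls included.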
common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.513 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.771 [2024-11-21 04:54:59.248053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.771 [2024-11-21 04:54:59.248083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.771 [2024-11-21 04:54:59.248176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.771 [2024-11-21 04:54:59.248233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.771 [2024-11-21 04:54:59.248246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77510 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77510 ']' 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 77510 00:08:42.771 04:54:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77510 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77510' 00:08:42.771 killing process with pid 77510 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77510 00:08:42.771 [2024-11-21 04:54:59.293067] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.771 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77510 00:08:42.771 [2024-11-21 04:54:59.323935] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.030 04:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:43.030 00:08:43.030 real 0m9.180s 00:08:43.030 user 0m15.538s 00:08:43.030 sys 0m2.026s 00:08:43.030 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.030 ************************************ 00:08:43.030 END TEST raid_state_function_test_sb 00:08:43.030 ************************************ 00:08:43.030 04:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.030 04:54:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:43.030 04:54:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:43.030 04:54:59 bdev_raid -- 
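The teardown above walks `autotest_common.sh`'s `killprocess` helper: check the pid is alive with `kill -0`, read its comm name, refuse to kill a `sudo` wrapper, then kill and reap. A self-contained approximation, reconstructed from the trace rather than copied from SPDK sources — it targets a throwaway `sleep` instead of pid 77510, and uses the portable `ps -o comm= -p` spelling (both assumptions):

```shell
# Approximation of autotest_common.sh's killprocess, reconstructed from the
# trace above; details may differ from the real helper.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # the '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 1    # is the process still alive?
    local process_name
    process_name=$(ps -o comm= -p "$pid")
    [ "$process_name" != "sudo" ] || return 1 # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it; ignore the SIGTERM status
}

sleep 60 &
pid=$!
killprocess "$pid"
```

In the log the guarded name check resolves to `reactor_0` (the SPDK app thread), so the kill proceeds and the subsequent `wait 77510` reaps the target process.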
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.030 04:54:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.030 ************************************ 00:08:43.030 START TEST raid_superblock_test 00:08:43.030 ************************************ 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:43.030 04:54:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78119 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78119 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 78119 ']' 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.030 04:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.030 [2024-11-21 04:54:59.711888] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
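The `waitforlisten 78119` call above polls (with `max_retries=100`) until the freshly launched bdev_svc answers on `/var/tmp/spdk.sock`. A simplified self-contained sketch: the listener is simulated by a background job touching a file, since a real SPDK target isn't assumed, and the real helper's RPC probe of the socket is elided.

```shell
# Simplified waitforlisten: poll for the RPC socket path until it exists or
# the retry budget runs out. The real helper also issues a probe RPC over
# the socket; that part is omitted here.
waitforlisten() {
    local rpc_addr=$1
    local max_retries=${2:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        if [ -e "$rpc_addr" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

sock=$(mktemp -u)                  # a path that does not exist yet
( sleep 0.3 && touch "$sock" ) &   # stand-in for bdev_svc opening its socket
waitforlisten "$sock" 50
rc=$?
rm -f "$sock"
```

The retry loop is why the "Waiting for process to start up..." line appears in the log before any RPC output: the test blocks here until the target is reachable or the budget is exhausted.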
00:08:43.030 [2024-11-21 04:54:59.712657] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78119 ] 00:08:43.289 [2024-11-21 04:54:59.884817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.289 [2024-11-21 04:54:59.910060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.289 [2024-11-21 04:54:59.952100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.289 [2024-11-21 04:54:59.952248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:43.856 
04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.856 malloc1 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.856 [2024-11-21 04:55:00.566107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.856 [2024-11-21 04:55:00.566241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.856 [2024-11-21 04:55:00.566283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:43.856 [2024-11-21 04:55:00.566318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.856 [2024-11-21 04:55:00.568542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.856 [2024-11-21 04:55:00.568616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.856 pt1 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.856 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.115 malloc2 00:08:44.115 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.115 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.116 [2024-11-21 04:55:00.599123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.116 [2024-11-21 04:55:00.599261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.116 [2024-11-21 04:55:00.599323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:44.116 [2024-11-21 04:55:00.599369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.116 [2024-11-21 04:55:00.601726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.116 [2024-11-21 04:55:00.601805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.116 
pt2 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.116 malloc3 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.116 [2024-11-21 04:55:00.628403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:44.116 [2024-11-21 04:55:00.628535] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.116 [2024-11-21 04:55:00.628575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:44.116 [2024-11-21 04:55:00.628607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.116 [2024-11-21 04:55:00.630941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.116 [2024-11-21 04:55:00.631021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:44.116 pt3 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.116 [2024-11-21 04:55:00.640441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.116 [2024-11-21 04:55:00.642546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.116 [2024-11-21 04:55:00.642655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:44.116 [2024-11-21 04:55:00.642888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:44.116 [2024-11-21 04:55:00.642948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:44.116 [2024-11-21 04:55:00.643328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:44.116 [2024-11-21 04:55:00.643554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:44.116 [2024-11-21 04:55:00.643607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:44.116 [2024-11-21 04:55:00.643903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.116 04:55:00 
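The per-base-bdev loop traced at `bdev_raid.sh@416`–`426` names a malloc/passthru pair per disk (`malloc1`/`pt1` with UUID `…0001`, and so on) before issuing the RPCs, then `@430` creates the concat raid with a 64 KiB strip and a superblock (`-s`). A dry-run reconstruction with `rpc_cmd` stubbed to `echo`, so no SPDK target is needed:

```shell
# Dry-run reconstruction of the bdev_raid.sh@416-430 setup loop; rpc_cmd is
# stubbed out here, whereas the real helper talks to /var/tmp/spdk.sock.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=3
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc=malloc$i
    bdev_pt=pt$i
    bdev_pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done

# bdev_raid.sh@430: 64 KiB strip (-z 64), concat level, superblock on (-s).
rpc_cmd bdev_raid_create -z 64 -r concat -b "${base_bdevs_pt[*]}" -n raid_bdev1 -s
```

This is why the verify step later sees `num_blocks: 190464` from three 32 MiB/512 B malloc bdevs: each passthru contributes 63488 data blocks after the 2048-block superblock offset, and concat sums them (3 × 63488 = 190464).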
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.116 "name": "raid_bdev1", 00:08:44.116 "uuid": "bc0bf97b-760e-4143-a992-0efb84c386c4", 00:08:44.116 "strip_size_kb": 64, 00:08:44.116 "state": "online", 00:08:44.116 "raid_level": "concat", 00:08:44.116 "superblock": true, 00:08:44.116 "num_base_bdevs": 3, 00:08:44.116 "num_base_bdevs_discovered": 3, 00:08:44.116 "num_base_bdevs_operational": 3, 00:08:44.116 "base_bdevs_list": [ 00:08:44.116 { 00:08:44.116 "name": "pt1", 00:08:44.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.116 "is_configured": true, 00:08:44.116 "data_offset": 2048, 00:08:44.116 "data_size": 63488 00:08:44.116 }, 00:08:44.116 { 00:08:44.116 "name": "pt2", 00:08:44.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.116 "is_configured": true, 00:08:44.116 "data_offset": 2048, 00:08:44.116 "data_size": 63488 00:08:44.116 }, 00:08:44.116 { 00:08:44.116 "name": "pt3", 00:08:44.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.116 "is_configured": true, 00:08:44.116 "data_offset": 2048, 00:08:44.116 "data_size": 63488 00:08:44.116 } 00:08:44.116 ] 00:08:44.116 }' 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.116 04:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.376 [2024-11-21 04:55:01.071972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.376 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.376 "name": "raid_bdev1", 00:08:44.376 "aliases": [ 00:08:44.376 "bc0bf97b-760e-4143-a992-0efb84c386c4" 00:08:44.376 ], 00:08:44.376 "product_name": "Raid Volume", 00:08:44.376 "block_size": 512, 00:08:44.376 "num_blocks": 190464, 00:08:44.376 "uuid": "bc0bf97b-760e-4143-a992-0efb84c386c4", 00:08:44.376 "assigned_rate_limits": { 00:08:44.376 "rw_ios_per_sec": 0, 00:08:44.376 "rw_mbytes_per_sec": 0, 00:08:44.376 "r_mbytes_per_sec": 0, 00:08:44.376 "w_mbytes_per_sec": 0 00:08:44.376 }, 00:08:44.376 "claimed": false, 00:08:44.376 "zoned": false, 00:08:44.376 "supported_io_types": { 00:08:44.376 "read": true, 00:08:44.376 "write": true, 00:08:44.376 "unmap": true, 00:08:44.376 "flush": true, 00:08:44.376 "reset": true, 00:08:44.376 "nvme_admin": false, 00:08:44.376 "nvme_io": false, 00:08:44.376 "nvme_io_md": false, 00:08:44.376 "write_zeroes": true, 00:08:44.376 "zcopy": false, 00:08:44.376 "get_zone_info": false, 00:08:44.376 "zone_management": false, 00:08:44.376 "zone_append": false, 00:08:44.376 "compare": 
false, 00:08:44.376 "compare_and_write": false, 00:08:44.376 "abort": false, 00:08:44.376 "seek_hole": false, 00:08:44.376 "seek_data": false, 00:08:44.376 "copy": false, 00:08:44.376 "nvme_iov_md": false 00:08:44.376 }, 00:08:44.376 "memory_domains": [ 00:08:44.376 { 00:08:44.376 "dma_device_id": "system", 00:08:44.376 "dma_device_type": 1 00:08:44.376 }, 00:08:44.376 { 00:08:44.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.376 "dma_device_type": 2 00:08:44.376 }, 00:08:44.376 { 00:08:44.376 "dma_device_id": "system", 00:08:44.376 "dma_device_type": 1 00:08:44.376 }, 00:08:44.376 { 00:08:44.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.376 "dma_device_type": 2 00:08:44.376 }, 00:08:44.376 { 00:08:44.376 "dma_device_id": "system", 00:08:44.376 "dma_device_type": 1 00:08:44.376 }, 00:08:44.376 { 00:08:44.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.376 "dma_device_type": 2 00:08:44.376 } 00:08:44.376 ], 00:08:44.376 "driver_specific": { 00:08:44.377 "raid": { 00:08:44.377 "uuid": "bc0bf97b-760e-4143-a992-0efb84c386c4", 00:08:44.377 "strip_size_kb": 64, 00:08:44.377 "state": "online", 00:08:44.377 "raid_level": "concat", 00:08:44.377 "superblock": true, 00:08:44.377 "num_base_bdevs": 3, 00:08:44.377 "num_base_bdevs_discovered": 3, 00:08:44.377 "num_base_bdevs_operational": 3, 00:08:44.377 "base_bdevs_list": [ 00:08:44.377 { 00:08:44.377 "name": "pt1", 00:08:44.377 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.377 "is_configured": true, 00:08:44.377 "data_offset": 2048, 00:08:44.377 "data_size": 63488 00:08:44.377 }, 00:08:44.377 { 00:08:44.377 "name": "pt2", 00:08:44.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.377 "is_configured": true, 00:08:44.377 "data_offset": 2048, 00:08:44.377 "data_size": 63488 00:08:44.377 }, 00:08:44.377 { 00:08:44.377 "name": "pt3", 00:08:44.377 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.377 "is_configured": true, 00:08:44.377 "data_offset": 2048, 00:08:44.377 
"data_size": 63488 00:08:44.377 } 00:08:44.377 ] 00:08:44.377 } 00:08:44.377 } 00:08:44.377 }' 00:08:44.377 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.637 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.637 pt2 00:08:44.637 pt3' 00:08:44.637 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.637 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.637 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.637 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.637 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.637 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.637 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.638 04:55:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 [2024-11-21 04:55:01.327539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.638 04:55:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc0bf97b-760e-4143-a992-0efb84c386c4 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bc0bf97b-760e-4143-a992-0efb84c386c4 ']' 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 [2024-11-21 04:55:01.355168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.638 [2024-11-21 04:55:01.355202] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.638 [2024-11-21 04:55:01.355320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.638 [2024-11-21 04:55:01.355383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.638 [2024-11-21 04:55:01.355398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.898 04:55:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.898 [2024-11-21 04:55:01.510860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:44.898 [2024-11-21 04:55:01.512815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:08:44.898 [2024-11-21 04:55:01.512864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:44.898 [2024-11-21 04:55:01.512911] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:44.898 [2024-11-21 04:55:01.512958] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:44.898 [2024-11-21 04:55:01.512977] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:44.898 [2024-11-21 04:55:01.512990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.898 [2024-11-21 04:55:01.512999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:44.898 request: 00:08:44.898 { 00:08:44.898 "name": "raid_bdev1", 00:08:44.898 "raid_level": "concat", 00:08:44.898 "base_bdevs": [ 00:08:44.898 "malloc1", 00:08:44.898 "malloc2", 00:08:44.898 "malloc3" 00:08:44.898 ], 00:08:44.898 "strip_size_kb": 64, 00:08:44.898 "superblock": false, 00:08:44.898 "method": "bdev_raid_create", 00:08:44.898 "req_id": 1 00:08:44.898 } 00:08:44.898 Got JSON-RPC error response 00:08:44.898 response: 00:08:44.898 { 00:08:44.898 "code": -17, 00:08:44.898 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:44.898 } 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.898 [2024-11-21 04:55:01.578702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.898 [2024-11-21 04:55:01.578789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.898 [2024-11-21 04:55:01.578821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:44.898 [2024-11-21 04:55:01.578850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.898 [2024-11-21 04:55:01.580998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.898 [2024-11-21 04:55:01.581068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.898 [2024-11-21 04:55:01.581166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:44.898 [2024-11-21 04:55:01.581236] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.898 pt1 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.898 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.899 "name": "raid_bdev1", 
00:08:44.899 "uuid": "bc0bf97b-760e-4143-a992-0efb84c386c4", 00:08:44.899 "strip_size_kb": 64, 00:08:44.899 "state": "configuring", 00:08:44.899 "raid_level": "concat", 00:08:44.899 "superblock": true, 00:08:44.899 "num_base_bdevs": 3, 00:08:44.899 "num_base_bdevs_discovered": 1, 00:08:44.899 "num_base_bdevs_operational": 3, 00:08:44.899 "base_bdevs_list": [ 00:08:44.899 { 00:08:44.899 "name": "pt1", 00:08:44.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.899 "is_configured": true, 00:08:44.899 "data_offset": 2048, 00:08:44.899 "data_size": 63488 00:08:44.899 }, 00:08:44.899 { 00:08:44.899 "name": null, 00:08:44.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.899 "is_configured": false, 00:08:44.899 "data_offset": 2048, 00:08:44.899 "data_size": 63488 00:08:44.899 }, 00:08:44.899 { 00:08:44.899 "name": null, 00:08:44.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.899 "is_configured": false, 00:08:44.899 "data_offset": 2048, 00:08:44.899 "data_size": 63488 00:08:44.899 } 00:08:44.899 ] 00:08:44.899 }' 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.899 04:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.468 [2024-11-21 04:55:02.013990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.468 [2024-11-21 04:55:02.014107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.468 [2024-11-21 04:55:02.014145] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:45.468 [2024-11-21 04:55:02.014177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.468 [2024-11-21 04:55:02.014629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.468 [2024-11-21 04:55:02.014686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.468 [2024-11-21 04:55:02.014805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:45.468 [2024-11-21 04:55:02.014860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.468 pt2 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.468 [2024-11-21 04:55:02.025950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.468 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.469 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.469 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.469 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.469 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.469 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.469 "name": "raid_bdev1", 00:08:45.469 "uuid": "bc0bf97b-760e-4143-a992-0efb84c386c4", 00:08:45.469 "strip_size_kb": 64, 00:08:45.469 "state": "configuring", 00:08:45.469 "raid_level": "concat", 00:08:45.469 "superblock": true, 00:08:45.469 "num_base_bdevs": 3, 00:08:45.469 "num_base_bdevs_discovered": 1, 00:08:45.469 "num_base_bdevs_operational": 3, 00:08:45.469 "base_bdevs_list": [ 00:08:45.469 { 00:08:45.469 "name": "pt1", 00:08:45.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.469 "is_configured": true, 00:08:45.469 "data_offset": 2048, 00:08:45.469 "data_size": 63488 00:08:45.469 }, 00:08:45.469 { 00:08:45.469 "name": null, 00:08:45.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.469 "is_configured": false, 00:08:45.469 "data_offset": 0, 00:08:45.469 "data_size": 63488 00:08:45.469 }, 00:08:45.469 { 00:08:45.469 "name": null, 00:08:45.469 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.469 "is_configured": false, 00:08:45.469 "data_offset": 2048, 00:08:45.469 "data_size": 63488 00:08:45.469 } 00:08:45.469 ] 00:08:45.469 }' 00:08:45.469 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.469 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.728 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:45.728 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.729 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.729 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.729 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.729 [2024-11-21 04:55:02.453199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.729 [2024-11-21 04:55:02.453295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.729 [2024-11-21 04:55:02.453319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:45.729 [2024-11-21 04:55:02.453327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.729 [2024-11-21 04:55:02.453684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.729 [2024-11-21 04:55:02.453700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.729 [2024-11-21 04:55:02.453760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:45.729 [2024-11-21 04:55:02.453776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.729 pt2 00:08:45.729 04:55:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.729 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:45.729 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.729 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:45.729 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.729 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.988 [2024-11-21 04:55:02.465174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:45.989 [2024-11-21 04:55:02.465215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.989 [2024-11-21 04:55:02.465248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:45.989 [2024-11-21 04:55:02.465256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.989 [2024-11-21 04:55:02.465566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.989 [2024-11-21 04:55:02.465582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:45.989 [2024-11-21 04:55:02.465632] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:45.989 [2024-11-21 04:55:02.465648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:45.989 [2024-11-21 04:55:02.465739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:45.989 [2024-11-21 04:55:02.465748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:45.989 [2024-11-21 04:55:02.465973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:08:45.989 [2024-11-21 04:55:02.466072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:45.989 [2024-11-21 04:55:02.466082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:45.989 [2024-11-21 04:55:02.466183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.989 pt3 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.989 04:55:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.989 "name": "raid_bdev1", 00:08:45.989 "uuid": "bc0bf97b-760e-4143-a992-0efb84c386c4", 00:08:45.989 "strip_size_kb": 64, 00:08:45.989 "state": "online", 00:08:45.989 "raid_level": "concat", 00:08:45.989 "superblock": true, 00:08:45.989 "num_base_bdevs": 3, 00:08:45.989 "num_base_bdevs_discovered": 3, 00:08:45.989 "num_base_bdevs_operational": 3, 00:08:45.989 "base_bdevs_list": [ 00:08:45.989 { 00:08:45.989 "name": "pt1", 00:08:45.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.989 "is_configured": true, 00:08:45.989 "data_offset": 2048, 00:08:45.989 "data_size": 63488 00:08:45.989 }, 00:08:45.989 { 00:08:45.989 "name": "pt2", 00:08:45.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.989 "is_configured": true, 00:08:45.989 "data_offset": 2048, 00:08:45.989 "data_size": 63488 00:08:45.989 }, 00:08:45.989 { 00:08:45.989 "name": "pt3", 00:08:45.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.989 "is_configured": true, 00:08:45.989 "data_offset": 2048, 00:08:45.989 "data_size": 63488 00:08:45.989 } 00:08:45.989 ] 00:08:45.989 }' 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.989 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.248 [2024-11-21 04:55:02.952676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.248 04:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.507 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.507 "name": "raid_bdev1", 00:08:46.507 "aliases": [ 00:08:46.507 "bc0bf97b-760e-4143-a992-0efb84c386c4" 00:08:46.507 ], 00:08:46.507 "product_name": "Raid Volume", 00:08:46.507 "block_size": 512, 00:08:46.507 "num_blocks": 190464, 00:08:46.507 "uuid": "bc0bf97b-760e-4143-a992-0efb84c386c4", 00:08:46.507 "assigned_rate_limits": { 00:08:46.507 "rw_ios_per_sec": 0, 00:08:46.507 "rw_mbytes_per_sec": 0, 00:08:46.507 "r_mbytes_per_sec": 0, 00:08:46.507 "w_mbytes_per_sec": 0 00:08:46.507 }, 00:08:46.507 "claimed": false, 00:08:46.507 "zoned": false, 00:08:46.507 "supported_io_types": { 00:08:46.507 "read": true, 00:08:46.507 "write": true, 00:08:46.507 "unmap": true, 00:08:46.507 "flush": true, 00:08:46.507 "reset": true, 00:08:46.507 "nvme_admin": false, 00:08:46.507 "nvme_io": false, 00:08:46.507 
"nvme_io_md": false, 00:08:46.507 "write_zeroes": true, 00:08:46.507 "zcopy": false, 00:08:46.507 "get_zone_info": false, 00:08:46.507 "zone_management": false, 00:08:46.507 "zone_append": false, 00:08:46.507 "compare": false, 00:08:46.507 "compare_and_write": false, 00:08:46.507 "abort": false, 00:08:46.507 "seek_hole": false, 00:08:46.507 "seek_data": false, 00:08:46.507 "copy": false, 00:08:46.507 "nvme_iov_md": false 00:08:46.507 }, 00:08:46.507 "memory_domains": [ 00:08:46.507 { 00:08:46.507 "dma_device_id": "system", 00:08:46.507 "dma_device_type": 1 00:08:46.507 }, 00:08:46.507 { 00:08:46.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.507 "dma_device_type": 2 00:08:46.507 }, 00:08:46.507 { 00:08:46.507 "dma_device_id": "system", 00:08:46.507 "dma_device_type": 1 00:08:46.507 }, 00:08:46.507 { 00:08:46.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.507 "dma_device_type": 2 00:08:46.507 }, 00:08:46.507 { 00:08:46.507 "dma_device_id": "system", 00:08:46.507 "dma_device_type": 1 00:08:46.507 }, 00:08:46.507 { 00:08:46.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.507 "dma_device_type": 2 00:08:46.507 } 00:08:46.507 ], 00:08:46.507 "driver_specific": { 00:08:46.507 "raid": { 00:08:46.507 "uuid": "bc0bf97b-760e-4143-a992-0efb84c386c4", 00:08:46.507 "strip_size_kb": 64, 00:08:46.507 "state": "online", 00:08:46.507 "raid_level": "concat", 00:08:46.507 "superblock": true, 00:08:46.507 "num_base_bdevs": 3, 00:08:46.507 "num_base_bdevs_discovered": 3, 00:08:46.507 "num_base_bdevs_operational": 3, 00:08:46.507 "base_bdevs_list": [ 00:08:46.507 { 00:08:46.507 "name": "pt1", 00:08:46.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:46.507 "is_configured": true, 00:08:46.507 "data_offset": 2048, 00:08:46.507 "data_size": 63488 00:08:46.507 }, 00:08:46.507 { 00:08:46.507 "name": "pt2", 00:08:46.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.507 "is_configured": true, 00:08:46.507 "data_offset": 2048, 00:08:46.507 "data_size": 
63488 00:08:46.507 }, 00:08:46.507 { 00:08:46.507 "name": "pt3", 00:08:46.508 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.508 "is_configured": true, 00:08:46.508 "data_offset": 2048, 00:08:46.508 "data_size": 63488 00:08:46.508 } 00:08:46.508 ] 00:08:46.508 } 00:08:46.508 } 00:08:46.508 }' 00:08:46.508 04:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:46.508 pt2 00:08:46.508 pt3' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:08:46.508 [2024-11-21 04:55:03.176229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bc0bf97b-760e-4143-a992-0efb84c386c4 '!=' bc0bf97b-760e-4143-a992-0efb84c386c4 ']' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78119 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 78119 ']' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 78119 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.508 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78119 00:08:46.768 killing process with pid 78119 00:08:46.768 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.768 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.768 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78119' 00:08:46.768 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 78119 00:08:46.768 [2024-11-21 04:55:03.255746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.768 [2024-11-21 
04:55:03.255838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.768 [2024-11-21 04:55:03.255897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.768 [2024-11-21 04:55:03.255906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:46.768 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 78119 00:08:46.768 [2024-11-21 04:55:03.289145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.029 04:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:47.029 00:08:47.029 real 0m3.885s 00:08:47.029 user 0m6.098s 00:08:47.029 sys 0m0.876s 00:08:47.029 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.029 04:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.029 ************************************ 00:08:47.029 END TEST raid_superblock_test 00:08:47.029 ************************************ 00:08:47.029 04:55:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:47.029 04:55:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:47.029 04:55:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.029 04:55:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.029 ************************************ 00:08:47.029 START TEST raid_read_error_test 00:08:47.029 ************************************ 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:47.029 
04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.029 04:55:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6ox8lwr6vn 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78350 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78350 00:08:47.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 78350 ']' 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.029 04:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.029 [2024-11-21 04:55:03.678875] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:08:47.030 [2024-11-21 04:55:03.679007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78350 ] 00:08:47.290 [2024-11-21 04:55:03.849450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.290 [2024-11-21 04:55:03.877447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.290 [2024-11-21 04:55:03.920083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.290 [2024-11-21 04:55:03.920141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.860 BaseBdev1_malloc 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.860 true 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.860 [2024-11-21 04:55:04.538131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:47.860 [2024-11-21 04:55:04.538188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.860 [2024-11-21 04:55:04.538207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:47.860 [2024-11-21 04:55:04.538216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.860 [2024-11-21 04:55:04.540326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.860 [2024-11-21 04:55:04.540364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:47.860 BaseBdev1 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.860 BaseBdev2_malloc 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.860 true 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.860 [2024-11-21 04:55:04.578328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:47.860 [2024-11-21 04:55:04.578374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.860 [2024-11-21 04:55:04.578407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:47.860 [2024-11-21 04:55:04.578415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.860 [2024-11-21 04:55:04.580449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.860 [2024-11-21 04:55:04.580488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:47.860 BaseBdev2 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.860 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.120 BaseBdev3_malloc 00:08:48.120 04:55:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.120 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:48.120 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.120 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.120 true 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.121 [2024-11-21 04:55:04.618727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:48.121 [2024-11-21 04:55:04.618772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.121 [2024-11-21 04:55:04.618789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:48.121 [2024-11-21 04:55:04.618798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.121 [2024-11-21 04:55:04.620911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.121 [2024-11-21 04:55:04.620993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:48.121 BaseBdev3 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.121 [2024-11-21 04:55:04.630754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.121 [2024-11-21 04:55:04.632596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.121 [2024-11-21 04:55:04.632668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.121 [2024-11-21 04:55:04.632836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:48.121 [2024-11-21 04:55:04.632852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.121 [2024-11-21 04:55:04.633111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:48.121 [2024-11-21 04:55:04.633252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:48.121 [2024-11-21 04:55:04.633263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:48.121 [2024-11-21 04:55:04.633426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.121 04:55:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.121 "name": "raid_bdev1", 00:08:48.121 "uuid": "c859f4f3-96fb-4a6b-a055-ec2e0e21d9f4", 00:08:48.121 "strip_size_kb": 64, 00:08:48.121 "state": "online", 00:08:48.121 "raid_level": "concat", 00:08:48.121 "superblock": true, 00:08:48.121 "num_base_bdevs": 3, 00:08:48.121 "num_base_bdevs_discovered": 3, 00:08:48.121 "num_base_bdevs_operational": 3, 00:08:48.121 "base_bdevs_list": [ 00:08:48.121 { 00:08:48.121 "name": "BaseBdev1", 00:08:48.121 "uuid": "8893842e-c153-5d2c-968e-1c82fd1e3282", 00:08:48.121 "is_configured": true, 00:08:48.121 "data_offset": 2048, 00:08:48.121 "data_size": 63488 00:08:48.121 }, 00:08:48.121 { 00:08:48.121 "name": "BaseBdev2", 00:08:48.121 "uuid": "9148f763-182a-51d1-b682-903eb2da8fb3", 00:08:48.121 "is_configured": true, 00:08:48.121 "data_offset": 2048, 00:08:48.121 "data_size": 63488 
00:08:48.121 }, 00:08:48.121 { 00:08:48.121 "name": "BaseBdev3", 00:08:48.121 "uuid": "a36ae4e0-34f0-5095-8d6e-0ba915735b8f", 00:08:48.121 "is_configured": true, 00:08:48.121 "data_offset": 2048, 00:08:48.121 "data_size": 63488 00:08:48.121 } 00:08:48.121 ] 00:08:48.121 }' 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.121 04:55:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.411 04:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:48.411 04:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:48.700 [2024-11-21 04:55:05.182178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.640 "name": "raid_bdev1", 00:08:49.640 "uuid": "c859f4f3-96fb-4a6b-a055-ec2e0e21d9f4", 00:08:49.640 "strip_size_kb": 64, 00:08:49.640 "state": "online", 00:08:49.640 "raid_level": "concat", 00:08:49.640 "superblock": true, 00:08:49.640 "num_base_bdevs": 3, 00:08:49.640 "num_base_bdevs_discovered": 3, 00:08:49.640 "num_base_bdevs_operational": 3, 00:08:49.640 "base_bdevs_list": [ 00:08:49.640 { 00:08:49.640 "name": "BaseBdev1", 00:08:49.640 "uuid": "8893842e-c153-5d2c-968e-1c82fd1e3282", 00:08:49.640 "is_configured": true, 00:08:49.640 "data_offset": 2048, 00:08:49.640 "data_size": 63488 
00:08:49.640 }, 00:08:49.640 { 00:08:49.640 "name": "BaseBdev2", 00:08:49.640 "uuid": "9148f763-182a-51d1-b682-903eb2da8fb3", 00:08:49.640 "is_configured": true, 00:08:49.640 "data_offset": 2048, 00:08:49.640 "data_size": 63488 00:08:49.640 }, 00:08:49.640 { 00:08:49.640 "name": "BaseBdev3", 00:08:49.640 "uuid": "a36ae4e0-34f0-5095-8d6e-0ba915735b8f", 00:08:49.640 "is_configured": true, 00:08:49.640 "data_offset": 2048, 00:08:49.640 "data_size": 63488 00:08:49.640 } 00:08:49.640 ] 00:08:49.640 }' 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.640 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.900 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:49.900 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.900 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.900 [2024-11-21 04:55:06.525634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.900 [2024-11-21 04:55:06.525674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.900 [2024-11-21 04:55:06.528536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.900 { 00:08:49.900 "results": [ 00:08:49.900 { 00:08:49.900 "job": "raid_bdev1", 00:08:49.900 "core_mask": "0x1", 00:08:49.900 "workload": "randrw", 00:08:49.900 "percentage": 50, 00:08:49.900 "status": "finished", 00:08:49.900 "queue_depth": 1, 00:08:49.900 "io_size": 131072, 00:08:49.900 "runtime": 1.344169, 00:08:49.900 "iops": 16931.650707611916, 00:08:49.900 "mibps": 2116.4563384514895, 00:08:49.900 "io_failed": 1, 00:08:49.900 "io_timeout": 0, 00:08:49.900 "avg_latency_us": 81.79453051012656, 00:08:49.900 "min_latency_us": 24.258515283842794, 00:08:49.900 "max_latency_us": 1387.989519650655 
00:08:49.900 } 00:08:49.900 ], 00:08:49.900 "core_count": 1 00:08:49.900 } 00:08:49.900 [2024-11-21 04:55:06.528672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.900 [2024-11-21 04:55:06.528718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.900 [2024-11-21 04:55:06.528736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:49.900 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.900 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78350 00:08:49.901 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 78350 ']' 00:08:49.901 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 78350 00:08:49.901 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:49.901 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.901 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78350 00:08:49.901 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.901 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.901 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78350' 00:08:49.901 killing process with pid 78350 00:08:49.901 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 78350 00:08:49.901 [2024-11-21 04:55:06.564799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.901 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 78350 00:08:49.901 [2024-11-21 
04:55:06.590572] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.161 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6ox8lwr6vn 00:08:50.161 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:50.161 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:50.161 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:50.161 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:50.161 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.161 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:50.161 04:55:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:50.161 00:08:50.161 real 0m3.228s 00:08:50.161 user 0m4.096s 00:08:50.161 sys 0m0.521s 00:08:50.161 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.161 ************************************ 00:08:50.161 END TEST raid_read_error_test 00:08:50.161 ************************************ 00:08:50.161 04:55:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.161 04:55:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:50.161 04:55:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:50.161 04:55:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.161 04:55:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.161 ************************************ 00:08:50.161 START TEST raid_write_error_test 00:08:50.161 ************************************ 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:08:50.161 04:55:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:50.161 04:55:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:50.161 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:50.421 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gaU1DY7q32 00:08:50.421 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78487 00:08:50.421 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:50.421 04:55:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78487 00:08:50.421 04:55:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78487 ']' 00:08:50.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.421 04:55:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.421 04:55:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.421 04:55:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:50.421 04:55:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.421 04:55:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.421 [2024-11-21 04:55:06.989146] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:50.421 [2024-11-21 04:55:06.989295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78487 ] 00:08:50.681 [2024-11-21 04:55:07.165696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.681 [2024-11-21 04:55:07.195079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.681 [2024-11-21 04:55:07.237790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.681 [2024-11-21 04:55:07.237829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.252 BaseBdev1_malloc 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.252 true 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.252 [2024-11-21 04:55:07.840541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:51.252 [2024-11-21 04:55:07.840610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.252 [2024-11-21 04:55:07.840636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:51.252 [2024-11-21 04:55:07.840646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.252 [2024-11-21 04:55:07.842993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.252 [2024-11-21 04:55:07.843031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:51.252 BaseBdev1 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.252 BaseBdev2_malloc 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.252 true 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.252 [2024-11-21 04:55:07.881486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:51.252 [2024-11-21 04:55:07.881622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.252 [2024-11-21 04:55:07.881648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:51.252 [2024-11-21 04:55:07.881657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.252 [2024-11-21 04:55:07.883891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.252 [2024-11-21 04:55:07.883932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:51.252 BaseBdev2 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.252 04:55:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.252 BaseBdev3_malloc 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.252 true 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.252 [2024-11-21 04:55:07.922296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:51.252 [2024-11-21 04:55:07.922352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.252 [2024-11-21 04:55:07.922373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:51.252 [2024-11-21 04:55:07.922398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.252 [2024-11-21 04:55:07.924623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.252 [2024-11-21 04:55:07.924658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:51.252 BaseBdev3 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.252 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.252 [2024-11-21 04:55:07.934337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.252 [2024-11-21 04:55:07.936353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.252 [2024-11-21 04:55:07.936434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.253 [2024-11-21 04:55:07.936617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:51.253 [2024-11-21 04:55:07.936631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:51.253 [2024-11-21 04:55:07.936912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:51.253 [2024-11-21 04:55:07.937053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:51.253 [2024-11-21 04:55:07.937063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:51.253 [2024-11-21 04:55:07.937250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.253 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.512 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.512 "name": "raid_bdev1", 00:08:51.512 "uuid": "8b621fa6-16b8-46cd-aeac-40005a0b62b8", 00:08:51.512 "strip_size_kb": 64, 00:08:51.512 "state": "online", 00:08:51.512 "raid_level": "concat", 00:08:51.512 "superblock": true, 00:08:51.512 "num_base_bdevs": 3, 00:08:51.512 "num_base_bdevs_discovered": 3, 00:08:51.512 "num_base_bdevs_operational": 3, 00:08:51.513 "base_bdevs_list": [ 00:08:51.513 { 00:08:51.513 
"name": "BaseBdev1", 00:08:51.513 "uuid": "8ac4b845-4ead-5d3a-a666-e81f0021625b", 00:08:51.513 "is_configured": true, 00:08:51.513 "data_offset": 2048, 00:08:51.513 "data_size": 63488 00:08:51.513 }, 00:08:51.513 { 00:08:51.513 "name": "BaseBdev2", 00:08:51.513 "uuid": "d4fde194-f7f3-5a69-8383-12dbd2757155", 00:08:51.513 "is_configured": true, 00:08:51.513 "data_offset": 2048, 00:08:51.513 "data_size": 63488 00:08:51.513 }, 00:08:51.513 { 00:08:51.513 "name": "BaseBdev3", 00:08:51.513 "uuid": "4e7fb848-f5fd-5771-bf99-15abd3f09323", 00:08:51.513 "is_configured": true, 00:08:51.513 "data_offset": 2048, 00:08:51.513 "data_size": 63488 00:08:51.513 } 00:08:51.513 ] 00:08:51.513 }' 00:08:51.513 04:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.513 04:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.772 04:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:51.772 04:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:51.772 [2024-11-21 04:55:08.473802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.711 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.712 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.712 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.971 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.971 "name": "raid_bdev1", 00:08:52.971 "uuid": "8b621fa6-16b8-46cd-aeac-40005a0b62b8", 00:08:52.971 "strip_size_kb": 64, 00:08:52.971 "state": "online", 
00:08:52.971 "raid_level": "concat", 00:08:52.971 "superblock": true, 00:08:52.971 "num_base_bdevs": 3, 00:08:52.971 "num_base_bdevs_discovered": 3, 00:08:52.971 "num_base_bdevs_operational": 3, 00:08:52.971 "base_bdevs_list": [ 00:08:52.971 { 00:08:52.971 "name": "BaseBdev1", 00:08:52.971 "uuid": "8ac4b845-4ead-5d3a-a666-e81f0021625b", 00:08:52.971 "is_configured": true, 00:08:52.971 "data_offset": 2048, 00:08:52.971 "data_size": 63488 00:08:52.971 }, 00:08:52.971 { 00:08:52.971 "name": "BaseBdev2", 00:08:52.971 "uuid": "d4fde194-f7f3-5a69-8383-12dbd2757155", 00:08:52.971 "is_configured": true, 00:08:52.971 "data_offset": 2048, 00:08:52.971 "data_size": 63488 00:08:52.971 }, 00:08:52.971 { 00:08:52.971 "name": "BaseBdev3", 00:08:52.971 "uuid": "4e7fb848-f5fd-5771-bf99-15abd3f09323", 00:08:52.971 "is_configured": true, 00:08:52.971 "data_offset": 2048, 00:08:52.971 "data_size": 63488 00:08:52.971 } 00:08:52.971 ] 00:08:52.971 }' 00:08:52.971 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.971 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.232 [2024-11-21 04:55:09.857387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.232 [2024-11-21 04:55:09.857481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.232 [2024-11-21 04:55:09.859984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.232 [2024-11-21 04:55:09.860073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.232 [2024-11-21 04:55:09.860140] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.232 [2024-11-21 04:55:09.860214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:53.232 { 00:08:53.232 "results": [ 00:08:53.232 { 00:08:53.232 "job": "raid_bdev1", 00:08:53.232 "core_mask": "0x1", 00:08:53.232 "workload": "randrw", 00:08:53.232 "percentage": 50, 00:08:53.232 "status": "finished", 00:08:53.232 "queue_depth": 1, 00:08:53.232 "io_size": 131072, 00:08:53.232 "runtime": 1.384538, 00:08:53.232 "iops": 17184.07150977438, 00:08:53.232 "mibps": 2148.0089387217977, 00:08:53.232 "io_failed": 1, 00:08:53.232 "io_timeout": 0, 00:08:53.232 "avg_latency_us": 80.64589867813677, 00:08:53.232 "min_latency_us": 24.593886462882097, 00:08:53.232 "max_latency_us": 1323.598253275109 00:08:53.232 } 00:08:53.232 ], 00:08:53.232 "core_count": 1 00:08:53.232 } 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78487 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78487 ']' 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78487 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78487 00:08:53.232 killing process with pid 78487 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.232 04:55:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78487' 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78487 00:08:53.232 [2024-11-21 04:55:09.908275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:53.232 04:55:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78487 00:08:53.232 [2024-11-21 04:55:09.933408] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.492 04:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gaU1DY7q32 00:08:53.492 04:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:53.492 04:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:53.492 ************************************ 00:08:53.492 END TEST raid_write_error_test 00:08:53.492 ************************************ 00:08:53.492 04:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:53.492 04:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:53.492 04:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.492 04:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:53.492 04:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:53.492 00:08:53.492 real 0m3.271s 00:08:53.492 user 0m4.141s 00:08:53.492 sys 0m0.549s 00:08:53.492 04:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.492 04:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.492 04:55:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:53.492 04:55:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:53.492 04:55:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:53.492 04:55:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.492 04:55:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.492 ************************************ 00:08:53.492 START TEST raid_state_function_test 00:08:53.492 ************************************ 00:08:53.492 04:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:08:53.492 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:53.492 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:53.492 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:53.492 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78614 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78614' 00:08:53.752 Process raid pid: 78614 00:08:53.752 04:55:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78614 00:08:53.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:53.753 04:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78614 ']' 00:08:53.753 04:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.753 04:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.753 04:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.753 04:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.753 04:55:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.753 [2024-11-21 04:55:10.313015] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:53.753 [2024-11-21 04:55:10.313212] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.753 [2024-11-21 04:55:10.463724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.013 [2024-11-21 04:55:10.488899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.013 [2024-11-21 04:55:10.530381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.013 [2024-11-21 04:55:10.530499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n 
Existed_Raid 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.583 [2024-11-21 04:55:11.143080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.583 [2024-11-21 04:55:11.143145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.583 [2024-11-21 04:55:11.143155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.583 [2024-11-21 04:55:11.143165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.583 [2024-11-21 04:55:11.143171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.583 [2024-11-21 04:55:11.143181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.583 04:55:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.583 "name": "Existed_Raid", 00:08:54.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.583 "strip_size_kb": 0, 00:08:54.583 "state": "configuring", 00:08:54.583 "raid_level": "raid1", 00:08:54.583 "superblock": false, 00:08:54.583 "num_base_bdevs": 3, 00:08:54.583 "num_base_bdevs_discovered": 0, 00:08:54.583 "num_base_bdevs_operational": 3, 00:08:54.583 "base_bdevs_list": [ 00:08:54.583 { 00:08:54.583 "name": "BaseBdev1", 00:08:54.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.583 "is_configured": false, 00:08:54.583 "data_offset": 0, 00:08:54.583 "data_size": 0 00:08:54.583 }, 00:08:54.583 { 00:08:54.583 "name": "BaseBdev2", 00:08:54.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.583 "is_configured": false, 00:08:54.583 "data_offset": 0, 00:08:54.583 "data_size": 0 00:08:54.583 }, 00:08:54.583 { 00:08:54.583 "name": "BaseBdev3", 00:08:54.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.583 "is_configured": false, 00:08:54.583 "data_offset": 0, 
00:08:54.583 "data_size": 0 00:08:54.583 } 00:08:54.583 ] 00:08:54.583 }' 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.583 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.851 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.851 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.851 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.125 [2024-11-21 04:55:11.582234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.125 [2024-11-21 04:55:11.582316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.126 [2024-11-21 04:55:11.594212] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.126 [2024-11-21 04:55:11.594286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.126 [2024-11-21 04:55:11.594313] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.126 [2024-11-21 04:55:11.594335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.126 [2024-11-21 04:55:11.594353] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:08:55.126 [2024-11-21 04:55:11.594373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.126 [2024-11-21 04:55:11.614812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.126 BaseBdev1 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.126 [ 00:08:55.126 { 00:08:55.126 "name": "BaseBdev1", 00:08:55.126 "aliases": [ 00:08:55.126 "f6c8b160-b526-40aa-a726-8d9a429ce3ef" 00:08:55.126 ], 00:08:55.126 "product_name": "Malloc disk", 00:08:55.126 "block_size": 512, 00:08:55.126 "num_blocks": 65536, 00:08:55.126 "uuid": "f6c8b160-b526-40aa-a726-8d9a429ce3ef", 00:08:55.126 "assigned_rate_limits": { 00:08:55.126 "rw_ios_per_sec": 0, 00:08:55.126 "rw_mbytes_per_sec": 0, 00:08:55.126 "r_mbytes_per_sec": 0, 00:08:55.126 "w_mbytes_per_sec": 0 00:08:55.126 }, 00:08:55.126 "claimed": true, 00:08:55.126 "claim_type": "exclusive_write", 00:08:55.126 "zoned": false, 00:08:55.126 "supported_io_types": { 00:08:55.126 "read": true, 00:08:55.126 "write": true, 00:08:55.126 "unmap": true, 00:08:55.126 "flush": true, 00:08:55.126 "reset": true, 00:08:55.126 "nvme_admin": false, 00:08:55.126 "nvme_io": false, 00:08:55.126 "nvme_io_md": false, 00:08:55.126 "write_zeroes": true, 00:08:55.126 "zcopy": true, 00:08:55.126 "get_zone_info": false, 00:08:55.126 "zone_management": false, 00:08:55.126 "zone_append": false, 00:08:55.126 "compare": false, 00:08:55.126 "compare_and_write": false, 00:08:55.126 "abort": true, 00:08:55.126 "seek_hole": false, 00:08:55.126 "seek_data": false, 00:08:55.126 "copy": true, 00:08:55.126 "nvme_iov_md": false 00:08:55.126 }, 00:08:55.126 "memory_domains": [ 00:08:55.126 { 00:08:55.126 "dma_device_id": "system", 00:08:55.126 "dma_device_type": 1 00:08:55.126 }, 00:08:55.126 { 00:08:55.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.126 "dma_device_type": 2 00:08:55.126 } 00:08:55.126 ], 00:08:55.126 "driver_specific": {} 00:08:55.126 } 
00:08:55.126 ] 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.126 04:55:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.126 "name": "Existed_Raid", 00:08:55.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.126 "strip_size_kb": 0, 00:08:55.126 "state": "configuring", 00:08:55.126 "raid_level": "raid1", 00:08:55.126 "superblock": false, 00:08:55.126 "num_base_bdevs": 3, 00:08:55.126 "num_base_bdevs_discovered": 1, 00:08:55.126 "num_base_bdevs_operational": 3, 00:08:55.126 "base_bdevs_list": [ 00:08:55.126 { 00:08:55.126 "name": "BaseBdev1", 00:08:55.126 "uuid": "f6c8b160-b526-40aa-a726-8d9a429ce3ef", 00:08:55.126 "is_configured": true, 00:08:55.126 "data_offset": 0, 00:08:55.126 "data_size": 65536 00:08:55.126 }, 00:08:55.126 { 00:08:55.126 "name": "BaseBdev2", 00:08:55.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.126 "is_configured": false, 00:08:55.126 "data_offset": 0, 00:08:55.126 "data_size": 0 00:08:55.126 }, 00:08:55.126 { 00:08:55.126 "name": "BaseBdev3", 00:08:55.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.126 "is_configured": false, 00:08:55.126 "data_offset": 0, 00:08:55.126 "data_size": 0 00:08:55.126 } 00:08:55.126 ] 00:08:55.126 }' 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.126 04:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.386 [2024-11-21 04:55:12.082034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.386 [2024-11-21 04:55:12.082077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 
00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.386 [2024-11-21 04:55:12.094040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.386 [2024-11-21 04:55:12.095933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.386 [2024-11-21 04:55:12.096010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.386 [2024-11-21 04:55:12.096056] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.386 [2024-11-21 04:55:12.096110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.386 04:55:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.386 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.645 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.645 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.645 "name": "Existed_Raid", 00:08:55.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.645 "strip_size_kb": 0, 00:08:55.645 "state": "configuring", 00:08:55.645 "raid_level": "raid1", 00:08:55.645 "superblock": false, 00:08:55.645 "num_base_bdevs": 3, 00:08:55.645 "num_base_bdevs_discovered": 1, 00:08:55.645 "num_base_bdevs_operational": 3, 00:08:55.645 "base_bdevs_list": [ 00:08:55.645 { 00:08:55.645 "name": "BaseBdev1", 00:08:55.645 "uuid": "f6c8b160-b526-40aa-a726-8d9a429ce3ef", 00:08:55.645 "is_configured": true, 00:08:55.645 "data_offset": 0, 00:08:55.645 "data_size": 65536 00:08:55.645 }, 00:08:55.645 { 00:08:55.645 "name": "BaseBdev2", 00:08:55.645 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.645 "is_configured": false, 00:08:55.645 "data_offset": 0, 00:08:55.645 "data_size": 0 00:08:55.645 }, 00:08:55.645 { 00:08:55.645 "name": "BaseBdev3", 00:08:55.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.645 "is_configured": false, 00:08:55.645 "data_offset": 0, 00:08:55.645 "data_size": 0 00:08:55.645 } 00:08:55.645 ] 00:08:55.645 }' 00:08:55.645 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.645 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.905 [2024-11-21 04:55:12.488243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.905 BaseBdev2 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_wait_for_examine 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.905 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.905 [ 00:08:55.905 { 00:08:55.905 "name": "BaseBdev2", 00:08:55.905 "aliases": [ 00:08:55.905 "b658fe42-3bf1-431f-8f3c-2b6be3e0f961" 00:08:55.905 ], 00:08:55.905 "product_name": "Malloc disk", 00:08:55.905 "block_size": 512, 00:08:55.905 "num_blocks": 65536, 00:08:55.905 "uuid": "b658fe42-3bf1-431f-8f3c-2b6be3e0f961", 00:08:55.905 "assigned_rate_limits": { 00:08:55.905 "rw_ios_per_sec": 0, 00:08:55.905 "rw_mbytes_per_sec": 0, 00:08:55.905 "r_mbytes_per_sec": 0, 00:08:55.905 "w_mbytes_per_sec": 0 00:08:55.905 }, 00:08:55.905 "claimed": true, 00:08:55.905 "claim_type": "exclusive_write", 00:08:55.905 "zoned": false, 00:08:55.905 "supported_io_types": { 00:08:55.905 "read": true, 00:08:55.905 "write": true, 00:08:55.905 "unmap": true, 00:08:55.905 "flush": true, 00:08:55.905 "reset": true, 00:08:55.905 "nvme_admin": false, 00:08:55.905 "nvme_io": false, 00:08:55.905 "nvme_io_md": false, 00:08:55.905 "write_zeroes": true, 00:08:55.905 "zcopy": true, 00:08:55.905 "get_zone_info": false, 00:08:55.905 "zone_management": false, 00:08:55.905 "zone_append": false, 00:08:55.905 "compare": false, 00:08:55.905 "compare_and_write": false, 00:08:55.905 "abort": true, 00:08:55.905 "seek_hole": false, 00:08:55.906 "seek_data": false, 00:08:55.906 "copy": true, 00:08:55.906 "nvme_iov_md": false 
00:08:55.906 }, 00:08:55.906 "memory_domains": [ 00:08:55.906 { 00:08:55.906 "dma_device_id": "system", 00:08:55.906 "dma_device_type": 1 00:08:55.906 }, 00:08:55.906 { 00:08:55.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.906 "dma_device_type": 2 00:08:55.906 } 00:08:55.906 ], 00:08:55.906 "driver_specific": {} 00:08:55.906 } 00:08:55.906 ] 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.906 "name": "Existed_Raid", 00:08:55.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.906 "strip_size_kb": 0, 00:08:55.906 "state": "configuring", 00:08:55.906 "raid_level": "raid1", 00:08:55.906 "superblock": false, 00:08:55.906 "num_base_bdevs": 3, 00:08:55.906 "num_base_bdevs_discovered": 2, 00:08:55.906 "num_base_bdevs_operational": 3, 00:08:55.906 "base_bdevs_list": [ 00:08:55.906 { 00:08:55.906 "name": "BaseBdev1", 00:08:55.906 "uuid": "f6c8b160-b526-40aa-a726-8d9a429ce3ef", 00:08:55.906 "is_configured": true, 00:08:55.906 "data_offset": 0, 00:08:55.906 "data_size": 65536 00:08:55.906 }, 00:08:55.906 { 00:08:55.906 "name": "BaseBdev2", 00:08:55.906 "uuid": "b658fe42-3bf1-431f-8f3c-2b6be3e0f961", 00:08:55.906 "is_configured": true, 00:08:55.906 "data_offset": 0, 00:08:55.906 "data_size": 65536 00:08:55.906 }, 00:08:55.906 { 00:08:55.906 "name": "BaseBdev3", 00:08:55.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.906 "is_configured": false, 00:08:55.906 "data_offset": 0, 00:08:55.906 "data_size": 0 00:08:55.906 } 00:08:55.906 ] 00:08:55.906 }' 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.906 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.476 [2024-11-21 04:55:12.981576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.476 [2024-11-21 04:55:12.981673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:56.476 [2024-11-21 04:55:12.981727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:56.476 [2024-11-21 04:55:12.982075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:56.476 [2024-11-21 04:55:12.982314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:56.476 [2024-11-21 04:55:12.982362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:56.476 [2024-11-21 04:55:12.982631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.476 BaseBdev3 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.476 04:55:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.476 04:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.476 [ 00:08:56.476 { 00:08:56.476 "name": "BaseBdev3", 00:08:56.476 "aliases": [ 00:08:56.476 "70223f38-0fb1-40b8-989c-433692eb3fe4" 00:08:56.476 ], 00:08:56.476 "product_name": "Malloc disk", 00:08:56.476 "block_size": 512, 00:08:56.476 "num_blocks": 65536, 00:08:56.476 "uuid": "70223f38-0fb1-40b8-989c-433692eb3fe4", 00:08:56.476 "assigned_rate_limits": { 00:08:56.476 "rw_ios_per_sec": 0, 00:08:56.476 "rw_mbytes_per_sec": 0, 00:08:56.476 "r_mbytes_per_sec": 0, 00:08:56.476 "w_mbytes_per_sec": 0 00:08:56.476 }, 00:08:56.476 "claimed": true, 00:08:56.476 "claim_type": "exclusive_write", 00:08:56.476 "zoned": false, 00:08:56.476 "supported_io_types": { 00:08:56.476 "read": true, 00:08:56.476 "write": true, 00:08:56.476 "unmap": true, 00:08:56.476 "flush": true, 00:08:56.476 "reset": true, 00:08:56.476 "nvme_admin": false, 00:08:56.476 "nvme_io": false, 00:08:56.476 "nvme_io_md": false, 00:08:56.476 "write_zeroes": true, 00:08:56.476 "zcopy": true, 00:08:56.476 "get_zone_info": false, 00:08:56.476 "zone_management": false, 00:08:56.476 "zone_append": false, 00:08:56.476 "compare": false, 00:08:56.476 "compare_and_write": false, 00:08:56.476 "abort": true, 00:08:56.476 "seek_hole": false, 00:08:56.476 
"seek_data": false, 00:08:56.476 "copy": true, 00:08:56.476 "nvme_iov_md": false 00:08:56.476 }, 00:08:56.476 "memory_domains": [ 00:08:56.476 { 00:08:56.476 "dma_device_id": "system", 00:08:56.476 "dma_device_type": 1 00:08:56.476 }, 00:08:56.476 { 00:08:56.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.476 "dma_device_type": 2 00:08:56.476 } 00:08:56.476 ], 00:08:56.476 "driver_specific": {} 00:08:56.476 } 00:08:56.476 ] 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.476 "name": "Existed_Raid", 00:08:56.476 "uuid": "f8d3420c-0c25-41b2-b8e9-9128e5397d9f", 00:08:56.476 "strip_size_kb": 0, 00:08:56.476 "state": "online", 00:08:56.476 "raid_level": "raid1", 00:08:56.476 "superblock": false, 00:08:56.476 "num_base_bdevs": 3, 00:08:56.476 "num_base_bdevs_discovered": 3, 00:08:56.476 "num_base_bdevs_operational": 3, 00:08:56.476 "base_bdevs_list": [ 00:08:56.476 { 00:08:56.476 "name": "BaseBdev1", 00:08:56.476 "uuid": "f6c8b160-b526-40aa-a726-8d9a429ce3ef", 00:08:56.476 "is_configured": true, 00:08:56.476 "data_offset": 0, 00:08:56.476 "data_size": 65536 00:08:56.476 }, 00:08:56.476 { 00:08:56.476 "name": "BaseBdev2", 00:08:56.476 "uuid": "b658fe42-3bf1-431f-8f3c-2b6be3e0f961", 00:08:56.476 "is_configured": true, 00:08:56.476 "data_offset": 0, 00:08:56.476 "data_size": 65536 00:08:56.476 }, 00:08:56.476 { 00:08:56.476 "name": "BaseBdev3", 00:08:56.476 "uuid": "70223f38-0fb1-40b8-989c-433692eb3fe4", 00:08:56.476 "is_configured": true, 00:08:56.476 "data_offset": 0, 00:08:56.476 "data_size": 65536 00:08:56.476 } 00:08:56.476 ] 00:08:56.476 }' 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.476 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.736 
04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:56.736 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:56.736 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.736 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.736 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.736 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.736 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:56.736 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.736 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.736 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.736 [2024-11-21 04:55:13.453184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.996 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.996 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.996 "name": "Existed_Raid", 00:08:56.996 "aliases": [ 00:08:56.996 "f8d3420c-0c25-41b2-b8e9-9128e5397d9f" 00:08:56.996 ], 00:08:56.996 "product_name": "Raid Volume", 00:08:56.996 "block_size": 512, 00:08:56.996 "num_blocks": 65536, 00:08:56.996 "uuid": "f8d3420c-0c25-41b2-b8e9-9128e5397d9f", 00:08:56.996 "assigned_rate_limits": { 00:08:56.996 "rw_ios_per_sec": 0, 00:08:56.996 "rw_mbytes_per_sec": 0, 00:08:56.996 "r_mbytes_per_sec": 0, 00:08:56.996 "w_mbytes_per_sec": 0 00:08:56.996 }, 00:08:56.996 "claimed": false, 00:08:56.996 "zoned": false, 
00:08:56.996 "supported_io_types": { 00:08:56.996 "read": true, 00:08:56.996 "write": true, 00:08:56.996 "unmap": false, 00:08:56.996 "flush": false, 00:08:56.996 "reset": true, 00:08:56.996 "nvme_admin": false, 00:08:56.996 "nvme_io": false, 00:08:56.996 "nvme_io_md": false, 00:08:56.996 "write_zeroes": true, 00:08:56.996 "zcopy": false, 00:08:56.996 "get_zone_info": false, 00:08:56.996 "zone_management": false, 00:08:56.996 "zone_append": false, 00:08:56.996 "compare": false, 00:08:56.996 "compare_and_write": false, 00:08:56.996 "abort": false, 00:08:56.996 "seek_hole": false, 00:08:56.996 "seek_data": false, 00:08:56.997 "copy": false, 00:08:56.997 "nvme_iov_md": false 00:08:56.997 }, 00:08:56.997 "memory_domains": [ 00:08:56.997 { 00:08:56.997 "dma_device_id": "system", 00:08:56.997 "dma_device_type": 1 00:08:56.997 }, 00:08:56.997 { 00:08:56.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.997 "dma_device_type": 2 00:08:56.997 }, 00:08:56.997 { 00:08:56.997 "dma_device_id": "system", 00:08:56.997 "dma_device_type": 1 00:08:56.997 }, 00:08:56.997 { 00:08:56.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.997 "dma_device_type": 2 00:08:56.997 }, 00:08:56.997 { 00:08:56.997 "dma_device_id": "system", 00:08:56.997 "dma_device_type": 1 00:08:56.997 }, 00:08:56.997 { 00:08:56.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.997 "dma_device_type": 2 00:08:56.997 } 00:08:56.997 ], 00:08:56.997 "driver_specific": { 00:08:56.997 "raid": { 00:08:56.997 "uuid": "f8d3420c-0c25-41b2-b8e9-9128e5397d9f", 00:08:56.997 "strip_size_kb": 0, 00:08:56.997 "state": "online", 00:08:56.997 "raid_level": "raid1", 00:08:56.997 "superblock": false, 00:08:56.997 "num_base_bdevs": 3, 00:08:56.997 "num_base_bdevs_discovered": 3, 00:08:56.997 "num_base_bdevs_operational": 3, 00:08:56.997 "base_bdevs_list": [ 00:08:56.997 { 00:08:56.997 "name": "BaseBdev1", 00:08:56.997 "uuid": "f6c8b160-b526-40aa-a726-8d9a429ce3ef", 00:08:56.997 "is_configured": true, 00:08:56.997 
"data_offset": 0, 00:08:56.997 "data_size": 65536 00:08:56.997 }, 00:08:56.997 { 00:08:56.997 "name": "BaseBdev2", 00:08:56.997 "uuid": "b658fe42-3bf1-431f-8f3c-2b6be3e0f961", 00:08:56.997 "is_configured": true, 00:08:56.997 "data_offset": 0, 00:08:56.997 "data_size": 65536 00:08:56.997 }, 00:08:56.997 { 00:08:56.997 "name": "BaseBdev3", 00:08:56.997 "uuid": "70223f38-0fb1-40b8-989c-433692eb3fe4", 00:08:56.997 "is_configured": true, 00:08:56.997 "data_offset": 0, 00:08:56.997 "data_size": 65536 00:08:56.997 } 00:08:56.997 ] 00:08:56.997 } 00:08:56.997 } 00:08:56.997 }' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:56.997 BaseBdev2 00:08:56.997 BaseBdev3' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.997 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.997 [2024-11-21 04:55:13.716402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.257 "name": "Existed_Raid", 00:08:57.257 "uuid": "f8d3420c-0c25-41b2-b8e9-9128e5397d9f", 00:08:57.257 "strip_size_kb": 0, 00:08:57.257 "state": "online", 00:08:57.257 "raid_level": "raid1", 00:08:57.257 "superblock": false, 00:08:57.257 "num_base_bdevs": 3, 00:08:57.257 "num_base_bdevs_discovered": 2, 00:08:57.257 "num_base_bdevs_operational": 2, 00:08:57.257 "base_bdevs_list": [ 00:08:57.257 { 00:08:57.257 "name": null, 00:08:57.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.257 "is_configured": false, 00:08:57.257 "data_offset": 0, 00:08:57.257 "data_size": 65536 00:08:57.257 }, 00:08:57.257 { 00:08:57.257 "name": "BaseBdev2", 00:08:57.257 "uuid": "b658fe42-3bf1-431f-8f3c-2b6be3e0f961", 00:08:57.257 "is_configured": true, 00:08:57.257 "data_offset": 0, 00:08:57.257 "data_size": 65536 00:08:57.257 }, 00:08:57.257 { 00:08:57.257 "name": "BaseBdev3", 00:08:57.257 "uuid": "70223f38-0fb1-40b8-989c-433692eb3fe4", 00:08:57.257 "is_configured": true, 00:08:57.257 "data_offset": 0, 00:08:57.257 "data_size": 65536 00:08:57.257 } 00:08:57.257 ] 
00:08:57.257 }' 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.257 04:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.518 [2024-11-21 04:55:14.206972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.518 04:55:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.518 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.779 [2024-11-21 04:55:14.274128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:57.779 [2024-11-21 04:55:14.274284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.779 [2024-11-21 04:55:14.285343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.779 [2024-11-21 04:55:14.285461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.779 [2024-11-21 04:55:14.285504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.779 04:55:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.779 BaseBdev2 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.779 
04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.779 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.779 [ 00:08:57.779 { 00:08:57.779 "name": "BaseBdev2", 00:08:57.779 "aliases": [ 00:08:57.779 "c8359529-6744-4d68-aead-e6539106fcd9" 00:08:57.779 ], 00:08:57.779 "product_name": "Malloc disk", 00:08:57.779 "block_size": 512, 00:08:57.779 "num_blocks": 65536, 00:08:57.779 "uuid": "c8359529-6744-4d68-aead-e6539106fcd9", 00:08:57.779 "assigned_rate_limits": { 00:08:57.779 "rw_ios_per_sec": 0, 00:08:57.779 "rw_mbytes_per_sec": 0, 00:08:57.779 "r_mbytes_per_sec": 0, 00:08:57.779 "w_mbytes_per_sec": 0 00:08:57.779 }, 00:08:57.779 "claimed": false, 00:08:57.779 "zoned": false, 00:08:57.779 "supported_io_types": { 00:08:57.779 "read": true, 00:08:57.779 "write": true, 00:08:57.779 "unmap": true, 00:08:57.779 "flush": true, 00:08:57.779 "reset": true, 00:08:57.779 "nvme_admin": false, 00:08:57.779 "nvme_io": false, 00:08:57.779 "nvme_io_md": false, 00:08:57.779 "write_zeroes": true, 
00:08:57.779 "zcopy": true, 00:08:57.779 "get_zone_info": false, 00:08:57.779 "zone_management": false, 00:08:57.779 "zone_append": false, 00:08:57.779 "compare": false, 00:08:57.779 "compare_and_write": false, 00:08:57.779 "abort": true, 00:08:57.779 "seek_hole": false, 00:08:57.779 "seek_data": false, 00:08:57.779 "copy": true, 00:08:57.779 "nvme_iov_md": false 00:08:57.779 }, 00:08:57.779 "memory_domains": [ 00:08:57.779 { 00:08:57.779 "dma_device_id": "system", 00:08:57.779 "dma_device_type": 1 00:08:57.779 }, 00:08:57.779 { 00:08:57.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.779 "dma_device_type": 2 00:08:57.779 } 00:08:57.779 ], 00:08:57.779 "driver_specific": {} 00:08:57.779 } 00:08:57.780 ] 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.780 BaseBdev3 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.780 04:55:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.780 [ 00:08:57.780 { 00:08:57.780 "name": "BaseBdev3", 00:08:57.780 "aliases": [ 00:08:57.780 "709e97e4-a6ef-4bff-b89d-5e5da3a370b6" 00:08:57.780 ], 00:08:57.780 "product_name": "Malloc disk", 00:08:57.780 "block_size": 512, 00:08:57.780 "num_blocks": 65536, 00:08:57.780 "uuid": "709e97e4-a6ef-4bff-b89d-5e5da3a370b6", 00:08:57.780 "assigned_rate_limits": { 00:08:57.780 "rw_ios_per_sec": 0, 00:08:57.780 "rw_mbytes_per_sec": 0, 00:08:57.780 "r_mbytes_per_sec": 0, 00:08:57.780 "w_mbytes_per_sec": 0 00:08:57.780 }, 00:08:57.780 "claimed": false, 00:08:57.780 "zoned": false, 00:08:57.780 "supported_io_types": { 00:08:57.780 "read": true, 00:08:57.780 "write": true, 00:08:57.780 "unmap": true, 00:08:57.780 "flush": true, 00:08:57.780 "reset": true, 00:08:57.780 "nvme_admin": false, 00:08:57.780 "nvme_io": false, 00:08:57.780 "nvme_io_md": false, 00:08:57.780 "write_zeroes": true, 
00:08:57.780 "zcopy": true, 00:08:57.780 "get_zone_info": false, 00:08:57.780 "zone_management": false, 00:08:57.780 "zone_append": false, 00:08:57.780 "compare": false, 00:08:57.780 "compare_and_write": false, 00:08:57.780 "abort": true, 00:08:57.780 "seek_hole": false, 00:08:57.780 "seek_data": false, 00:08:57.780 "copy": true, 00:08:57.780 "nvme_iov_md": false 00:08:57.780 }, 00:08:57.780 "memory_domains": [ 00:08:57.780 { 00:08:57.780 "dma_device_id": "system", 00:08:57.780 "dma_device_type": 1 00:08:57.780 }, 00:08:57.780 { 00:08:57.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.780 "dma_device_type": 2 00:08:57.780 } 00:08:57.780 ], 00:08:57.780 "driver_specific": {} 00:08:57.780 } 00:08:57.780 ] 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.780 [2024-11-21 04:55:14.450542] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.780 [2024-11-21 04:55:14.450671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.780 [2024-11-21 04:55:14.450718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.780 [2024-11-21 04:55:14.452674] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.780 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.040 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:58.040 "name": "Existed_Raid", 00:08:58.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.040 "strip_size_kb": 0, 00:08:58.040 "state": "configuring", 00:08:58.040 "raid_level": "raid1", 00:08:58.040 "superblock": false, 00:08:58.040 "num_base_bdevs": 3, 00:08:58.040 "num_base_bdevs_discovered": 2, 00:08:58.040 "num_base_bdevs_operational": 3, 00:08:58.040 "base_bdevs_list": [ 00:08:58.040 { 00:08:58.040 "name": "BaseBdev1", 00:08:58.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.040 "is_configured": false, 00:08:58.040 "data_offset": 0, 00:08:58.040 "data_size": 0 00:08:58.040 }, 00:08:58.040 { 00:08:58.040 "name": "BaseBdev2", 00:08:58.040 "uuid": "c8359529-6744-4d68-aead-e6539106fcd9", 00:08:58.040 "is_configured": true, 00:08:58.040 "data_offset": 0, 00:08:58.040 "data_size": 65536 00:08:58.040 }, 00:08:58.040 { 00:08:58.040 "name": "BaseBdev3", 00:08:58.040 "uuid": "709e97e4-a6ef-4bff-b89d-5e5da3a370b6", 00:08:58.040 "is_configured": true, 00:08:58.040 "data_offset": 0, 00:08:58.040 "data_size": 65536 00:08:58.040 } 00:08:58.040 ] 00:08:58.040 }' 00:08:58.040 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.040 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.300 [2024-11-21 04:55:14.901710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.300 "name": "Existed_Raid", 00:08:58.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.300 "strip_size_kb": 0, 00:08:58.300 "state": "configuring", 00:08:58.300 "raid_level": "raid1", 00:08:58.300 "superblock": false, 00:08:58.300 "num_base_bdevs": 3, 
00:08:58.300 "num_base_bdevs_discovered": 1, 00:08:58.300 "num_base_bdevs_operational": 3, 00:08:58.300 "base_bdevs_list": [ 00:08:58.300 { 00:08:58.300 "name": "BaseBdev1", 00:08:58.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.300 "is_configured": false, 00:08:58.300 "data_offset": 0, 00:08:58.300 "data_size": 0 00:08:58.300 }, 00:08:58.300 { 00:08:58.300 "name": null, 00:08:58.300 "uuid": "c8359529-6744-4d68-aead-e6539106fcd9", 00:08:58.300 "is_configured": false, 00:08:58.300 "data_offset": 0, 00:08:58.300 "data_size": 65536 00:08:58.300 }, 00:08:58.300 { 00:08:58.300 "name": "BaseBdev3", 00:08:58.300 "uuid": "709e97e4-a6ef-4bff-b89d-5e5da3a370b6", 00:08:58.300 "is_configured": true, 00:08:58.300 "data_offset": 0, 00:08:58.300 "data_size": 65536 00:08:58.300 } 00:08:58.300 ] 00:08:58.300 }' 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.300 04:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.870 04:55:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.870 BaseBdev1 00:08:58.870 [2024-11-21 04:55:15.419846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.870 [ 00:08:58.870 { 00:08:58.870 "name": "BaseBdev1", 00:08:58.870 "aliases": [ 00:08:58.870 "7a8855da-1d74-4617-bad1-f2c210039947" 00:08:58.870 ], 00:08:58.870 "product_name": "Malloc disk", 
00:08:58.870 "block_size": 512, 00:08:58.870 "num_blocks": 65536, 00:08:58.870 "uuid": "7a8855da-1d74-4617-bad1-f2c210039947", 00:08:58.870 "assigned_rate_limits": { 00:08:58.870 "rw_ios_per_sec": 0, 00:08:58.870 "rw_mbytes_per_sec": 0, 00:08:58.870 "r_mbytes_per_sec": 0, 00:08:58.870 "w_mbytes_per_sec": 0 00:08:58.870 }, 00:08:58.870 "claimed": true, 00:08:58.870 "claim_type": "exclusive_write", 00:08:58.870 "zoned": false, 00:08:58.870 "supported_io_types": { 00:08:58.870 "read": true, 00:08:58.870 "write": true, 00:08:58.870 "unmap": true, 00:08:58.870 "flush": true, 00:08:58.870 "reset": true, 00:08:58.870 "nvme_admin": false, 00:08:58.870 "nvme_io": false, 00:08:58.870 "nvme_io_md": false, 00:08:58.870 "write_zeroes": true, 00:08:58.870 "zcopy": true, 00:08:58.870 "get_zone_info": false, 00:08:58.870 "zone_management": false, 00:08:58.870 "zone_append": false, 00:08:58.870 "compare": false, 00:08:58.870 "compare_and_write": false, 00:08:58.870 "abort": true, 00:08:58.870 "seek_hole": false, 00:08:58.870 "seek_data": false, 00:08:58.870 "copy": true, 00:08:58.870 "nvme_iov_md": false 00:08:58.870 }, 00:08:58.870 "memory_domains": [ 00:08:58.870 { 00:08:58.870 "dma_device_id": "system", 00:08:58.870 "dma_device_type": 1 00:08:58.870 }, 00:08:58.870 { 00:08:58.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.870 "dma_device_type": 2 00:08:58.870 } 00:08:58.870 ], 00:08:58.870 "driver_specific": {} 00:08:58.870 } 00:08:58.870 ] 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.870 "name": "Existed_Raid", 00:08:58.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.870 "strip_size_kb": 0, 00:08:58.870 "state": "configuring", 00:08:58.870 "raid_level": "raid1", 00:08:58.870 "superblock": false, 00:08:58.870 "num_base_bdevs": 3, 00:08:58.870 "num_base_bdevs_discovered": 2, 00:08:58.870 "num_base_bdevs_operational": 3, 00:08:58.870 "base_bdevs_list": [ 00:08:58.870 { 00:08:58.870 "name": "BaseBdev1", 00:08:58.870 "uuid": 
"7a8855da-1d74-4617-bad1-f2c210039947", 00:08:58.870 "is_configured": true, 00:08:58.870 "data_offset": 0, 00:08:58.870 "data_size": 65536 00:08:58.870 }, 00:08:58.870 { 00:08:58.870 "name": null, 00:08:58.870 "uuid": "c8359529-6744-4d68-aead-e6539106fcd9", 00:08:58.870 "is_configured": false, 00:08:58.870 "data_offset": 0, 00:08:58.870 "data_size": 65536 00:08:58.870 }, 00:08:58.870 { 00:08:58.870 "name": "BaseBdev3", 00:08:58.870 "uuid": "709e97e4-a6ef-4bff-b89d-5e5da3a370b6", 00:08:58.870 "is_configured": true, 00:08:58.870 "data_offset": 0, 00:08:58.870 "data_size": 65536 00:08:58.870 } 00:08:58.870 ] 00:08:58.870 }' 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.870 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.440 [2024-11-21 04:55:15.975002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.440 04:55:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.440 04:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.440 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.440 "name": "Existed_Raid", 00:08:59.440 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:59.440 "strip_size_kb": 0, 00:08:59.440 "state": "configuring", 00:08:59.440 "raid_level": "raid1", 00:08:59.440 "superblock": false, 00:08:59.440 "num_base_bdevs": 3, 00:08:59.441 "num_base_bdevs_discovered": 1, 00:08:59.441 "num_base_bdevs_operational": 3, 00:08:59.441 "base_bdevs_list": [ 00:08:59.441 { 00:08:59.441 "name": "BaseBdev1", 00:08:59.441 "uuid": "7a8855da-1d74-4617-bad1-f2c210039947", 00:08:59.441 "is_configured": true, 00:08:59.441 "data_offset": 0, 00:08:59.441 "data_size": 65536 00:08:59.441 }, 00:08:59.441 { 00:08:59.441 "name": null, 00:08:59.441 "uuid": "c8359529-6744-4d68-aead-e6539106fcd9", 00:08:59.441 "is_configured": false, 00:08:59.441 "data_offset": 0, 00:08:59.441 "data_size": 65536 00:08:59.441 }, 00:08:59.441 { 00:08:59.441 "name": null, 00:08:59.441 "uuid": "709e97e4-a6ef-4bff-b89d-5e5da3a370b6", 00:08:59.441 "is_configured": false, 00:08:59.441 "data_offset": 0, 00:08:59.441 "data_size": 65536 00:08:59.441 } 00:08:59.441 ] 00:08:59.441 }' 00:08:59.441 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.441 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.700 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.700 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:59.700 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.700 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.960 [2024-11-21 04:55:16.466230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.960 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.960 "name": "Existed_Raid", 00:08:59.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.960 "strip_size_kb": 0, 00:08:59.960 "state": "configuring", 00:08:59.960 "raid_level": "raid1", 00:08:59.960 "superblock": false, 00:08:59.960 "num_base_bdevs": 3, 00:08:59.960 "num_base_bdevs_discovered": 2, 00:08:59.960 "num_base_bdevs_operational": 3, 00:08:59.961 "base_bdevs_list": [ 00:08:59.961 { 00:08:59.961 "name": "BaseBdev1", 00:08:59.961 "uuid": "7a8855da-1d74-4617-bad1-f2c210039947", 00:08:59.961 "is_configured": true, 00:08:59.961 "data_offset": 0, 00:08:59.961 "data_size": 65536 00:08:59.961 }, 00:08:59.961 { 00:08:59.961 "name": null, 00:08:59.961 "uuid": "c8359529-6744-4d68-aead-e6539106fcd9", 00:08:59.961 "is_configured": false, 00:08:59.961 "data_offset": 0, 00:08:59.961 "data_size": 65536 00:08:59.961 }, 00:08:59.961 { 00:08:59.961 "name": "BaseBdev3", 00:08:59.961 "uuid": "709e97e4-a6ef-4bff-b89d-5e5da3a370b6", 00:08:59.961 "is_configured": true, 00:08:59.961 "data_offset": 0, 00:08:59.961 "data_size": 65536 00:08:59.961 } 00:08:59.961 ] 00:08:59.961 }' 00:08:59.961 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.961 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.221 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.221 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.221 04:55:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.221 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.481 [2024-11-21 04:55:16.977374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.481 04:55:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.481 04:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.481 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.481 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.481 "name": "Existed_Raid", 00:09:00.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.481 "strip_size_kb": 0, 00:09:00.481 "state": "configuring", 00:09:00.481 "raid_level": "raid1", 00:09:00.481 "superblock": false, 00:09:00.481 "num_base_bdevs": 3, 00:09:00.481 "num_base_bdevs_discovered": 1, 00:09:00.481 "num_base_bdevs_operational": 3, 00:09:00.481 "base_bdevs_list": [ 00:09:00.481 { 00:09:00.481 "name": null, 00:09:00.481 "uuid": "7a8855da-1d74-4617-bad1-f2c210039947", 00:09:00.481 "is_configured": false, 00:09:00.481 "data_offset": 0, 00:09:00.481 "data_size": 65536 00:09:00.481 }, 00:09:00.481 { 00:09:00.481 "name": null, 00:09:00.481 "uuid": "c8359529-6744-4d68-aead-e6539106fcd9", 00:09:00.481 "is_configured": false, 00:09:00.481 "data_offset": 0, 00:09:00.481 "data_size": 65536 00:09:00.481 }, 00:09:00.481 { 00:09:00.481 "name": "BaseBdev3", 00:09:00.481 "uuid": "709e97e4-a6ef-4bff-b89d-5e5da3a370b6", 00:09:00.481 "is_configured": true, 00:09:00.481 "data_offset": 0, 00:09:00.481 "data_size": 65536 00:09:00.481 } 00:09:00.481 ] 00:09:00.481 }' 00:09:00.481 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.481 04:55:17 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:00.741 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.741 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.741 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.741 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.001 [2024-11-21 04:55:17.522967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.001 "name": "Existed_Raid", 00:09:01.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.001 "strip_size_kb": 0, 00:09:01.001 "state": "configuring", 00:09:01.001 "raid_level": "raid1", 00:09:01.001 "superblock": false, 00:09:01.001 "num_base_bdevs": 3, 00:09:01.001 "num_base_bdevs_discovered": 2, 00:09:01.001 "num_base_bdevs_operational": 3, 00:09:01.001 "base_bdevs_list": [ 00:09:01.001 { 00:09:01.001 "name": null, 00:09:01.001 "uuid": "7a8855da-1d74-4617-bad1-f2c210039947", 00:09:01.001 "is_configured": false, 00:09:01.001 "data_offset": 0, 00:09:01.001 "data_size": 65536 00:09:01.001 }, 00:09:01.001 { 00:09:01.001 "name": "BaseBdev2", 00:09:01.001 "uuid": "c8359529-6744-4d68-aead-e6539106fcd9", 00:09:01.001 "is_configured": true, 00:09:01.001 "data_offset": 0, 00:09:01.001 "data_size": 65536 00:09:01.001 }, 00:09:01.001 { 
00:09:01.001 "name": "BaseBdev3", 00:09:01.001 "uuid": "709e97e4-a6ef-4bff-b89d-5e5da3a370b6", 00:09:01.001 "is_configured": true, 00:09:01.001 "data_offset": 0, 00:09:01.001 "data_size": 65536 00:09:01.001 } 00:09:01.001 ] 00:09:01.001 }' 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.001 04:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.304 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.304 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.304 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.304 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.304 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7a8855da-1d74-4617-bad1-f2c210039947 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.564 04:55:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.564 [2024-11-21 04:55:18.100865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:01.564 [2024-11-21 04:55:18.100982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:01.564 [2024-11-21 04:55:18.100995] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:01.564 [2024-11-21 04:55:18.101278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:01.564 [2024-11-21 04:55:18.101427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:01.564 [2024-11-21 04:55:18.101441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:01.564 [2024-11-21 04:55:18.101631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.564 NewBaseBdev 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.564 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.564 [ 00:09:01.564 { 00:09:01.564 "name": "NewBaseBdev", 00:09:01.564 "aliases": [ 00:09:01.564 "7a8855da-1d74-4617-bad1-f2c210039947" 00:09:01.564 ], 00:09:01.564 "product_name": "Malloc disk", 00:09:01.564 "block_size": 512, 00:09:01.564 "num_blocks": 65536, 00:09:01.564 "uuid": "7a8855da-1d74-4617-bad1-f2c210039947", 00:09:01.564 "assigned_rate_limits": { 00:09:01.564 "rw_ios_per_sec": 0, 00:09:01.564 "rw_mbytes_per_sec": 0, 00:09:01.564 "r_mbytes_per_sec": 0, 00:09:01.564 "w_mbytes_per_sec": 0 00:09:01.564 }, 00:09:01.564 "claimed": true, 00:09:01.564 "claim_type": "exclusive_write", 00:09:01.564 "zoned": false, 00:09:01.564 "supported_io_types": { 00:09:01.564 "read": true, 00:09:01.564 "write": true, 00:09:01.564 "unmap": true, 00:09:01.564 "flush": true, 00:09:01.564 "reset": true, 00:09:01.565 "nvme_admin": false, 00:09:01.565 "nvme_io": false, 00:09:01.565 "nvme_io_md": false, 00:09:01.565 "write_zeroes": true, 00:09:01.565 "zcopy": true, 00:09:01.565 "get_zone_info": false, 00:09:01.565 "zone_management": false, 00:09:01.565 "zone_append": false, 00:09:01.565 "compare": false, 00:09:01.565 "compare_and_write": false, 00:09:01.565 "abort": true, 00:09:01.565 "seek_hole": false, 00:09:01.565 "seek_data": false, 00:09:01.565 "copy": true, 00:09:01.565 "nvme_iov_md": false 00:09:01.565 }, 00:09:01.565 "memory_domains": [ 00:09:01.565 { 00:09:01.565 
"dma_device_id": "system", 00:09:01.565 "dma_device_type": 1 00:09:01.565 }, 00:09:01.565 { 00:09:01.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.565 "dma_device_type": 2 00:09:01.565 } 00:09:01.565 ], 00:09:01.565 "driver_specific": {} 00:09:01.565 } 00:09:01.565 ] 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.565 "name": "Existed_Raid", 00:09:01.565 "uuid": "a2c945c2-fced-4543-bd09-c608c5a427f8", 00:09:01.565 "strip_size_kb": 0, 00:09:01.565 "state": "online", 00:09:01.565 "raid_level": "raid1", 00:09:01.565 "superblock": false, 00:09:01.565 "num_base_bdevs": 3, 00:09:01.565 "num_base_bdevs_discovered": 3, 00:09:01.565 "num_base_bdevs_operational": 3, 00:09:01.565 "base_bdevs_list": [ 00:09:01.565 { 00:09:01.565 "name": "NewBaseBdev", 00:09:01.565 "uuid": "7a8855da-1d74-4617-bad1-f2c210039947", 00:09:01.565 "is_configured": true, 00:09:01.565 "data_offset": 0, 00:09:01.565 "data_size": 65536 00:09:01.565 }, 00:09:01.565 { 00:09:01.565 "name": "BaseBdev2", 00:09:01.565 "uuid": "c8359529-6744-4d68-aead-e6539106fcd9", 00:09:01.565 "is_configured": true, 00:09:01.565 "data_offset": 0, 00:09:01.565 "data_size": 65536 00:09:01.565 }, 00:09:01.565 { 00:09:01.565 "name": "BaseBdev3", 00:09:01.565 "uuid": "709e97e4-a6ef-4bff-b89d-5e5da3a370b6", 00:09:01.565 "is_configured": true, 00:09:01.565 "data_offset": 0, 00:09:01.565 "data_size": 65536 00:09:01.565 } 00:09:01.565 ] 00:09:01.565 }' 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.565 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.825 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.825 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.825 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.825 04:55:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.825 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.825 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.825 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.825 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.825 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.825 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.825 [2024-11-21 04:55:18.540500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.084 "name": "Existed_Raid", 00:09:02.084 "aliases": [ 00:09:02.084 "a2c945c2-fced-4543-bd09-c608c5a427f8" 00:09:02.084 ], 00:09:02.084 "product_name": "Raid Volume", 00:09:02.084 "block_size": 512, 00:09:02.084 "num_blocks": 65536, 00:09:02.084 "uuid": "a2c945c2-fced-4543-bd09-c608c5a427f8", 00:09:02.084 "assigned_rate_limits": { 00:09:02.084 "rw_ios_per_sec": 0, 00:09:02.084 "rw_mbytes_per_sec": 0, 00:09:02.084 "r_mbytes_per_sec": 0, 00:09:02.084 "w_mbytes_per_sec": 0 00:09:02.084 }, 00:09:02.084 "claimed": false, 00:09:02.084 "zoned": false, 00:09:02.084 "supported_io_types": { 00:09:02.084 "read": true, 00:09:02.084 "write": true, 00:09:02.084 "unmap": false, 00:09:02.084 "flush": false, 00:09:02.084 "reset": true, 00:09:02.084 "nvme_admin": false, 00:09:02.084 "nvme_io": false, 00:09:02.084 "nvme_io_md": false, 00:09:02.084 "write_zeroes": true, 00:09:02.084 "zcopy": false, 00:09:02.084 
"get_zone_info": false, 00:09:02.084 "zone_management": false, 00:09:02.084 "zone_append": false, 00:09:02.084 "compare": false, 00:09:02.084 "compare_and_write": false, 00:09:02.084 "abort": false, 00:09:02.084 "seek_hole": false, 00:09:02.084 "seek_data": false, 00:09:02.084 "copy": false, 00:09:02.084 "nvme_iov_md": false 00:09:02.084 }, 00:09:02.084 "memory_domains": [ 00:09:02.084 { 00:09:02.084 "dma_device_id": "system", 00:09:02.084 "dma_device_type": 1 00:09:02.084 }, 00:09:02.084 { 00:09:02.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.084 "dma_device_type": 2 00:09:02.084 }, 00:09:02.084 { 00:09:02.084 "dma_device_id": "system", 00:09:02.084 "dma_device_type": 1 00:09:02.084 }, 00:09:02.084 { 00:09:02.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.084 "dma_device_type": 2 00:09:02.084 }, 00:09:02.084 { 00:09:02.084 "dma_device_id": "system", 00:09:02.084 "dma_device_type": 1 00:09:02.084 }, 00:09:02.084 { 00:09:02.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.084 "dma_device_type": 2 00:09:02.084 } 00:09:02.084 ], 00:09:02.084 "driver_specific": { 00:09:02.084 "raid": { 00:09:02.084 "uuid": "a2c945c2-fced-4543-bd09-c608c5a427f8", 00:09:02.084 "strip_size_kb": 0, 00:09:02.084 "state": "online", 00:09:02.084 "raid_level": "raid1", 00:09:02.084 "superblock": false, 00:09:02.084 "num_base_bdevs": 3, 00:09:02.084 "num_base_bdevs_discovered": 3, 00:09:02.084 "num_base_bdevs_operational": 3, 00:09:02.084 "base_bdevs_list": [ 00:09:02.084 { 00:09:02.084 "name": "NewBaseBdev", 00:09:02.084 "uuid": "7a8855da-1d74-4617-bad1-f2c210039947", 00:09:02.084 "is_configured": true, 00:09:02.084 "data_offset": 0, 00:09:02.084 "data_size": 65536 00:09:02.084 }, 00:09:02.084 { 00:09:02.084 "name": "BaseBdev2", 00:09:02.084 "uuid": "c8359529-6744-4d68-aead-e6539106fcd9", 00:09:02.084 "is_configured": true, 00:09:02.084 "data_offset": 0, 00:09:02.084 "data_size": 65536 00:09:02.084 }, 00:09:02.084 { 00:09:02.084 "name": "BaseBdev3", 00:09:02.084 "uuid": 
"709e97e4-a6ef-4bff-b89d-5e5da3a370b6", 00:09:02.084 "is_configured": true, 00:09:02.084 "data_offset": 0, 00:09:02.084 "data_size": 65536 00:09:02.084 } 00:09:02.084 ] 00:09:02.084 } 00:09:02.084 } 00:09:02.084 }' 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:02.084 BaseBdev2 00:09:02.084 BaseBdev3' 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.084 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.344 
[2024-11-21 04:55:18.835649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.344 [2024-11-21 04:55:18.835682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.344 [2024-11-21 04:55:18.835764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.344 [2024-11-21 04:55:18.836022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.344 [2024-11-21 04:55:18.836032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78614 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78614 ']' 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 78614 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78614 00:09:02.344 killing process with pid 78614 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78614' 00:09:02.344 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78614 00:09:02.344 [2024-11-21 
04:55:18.893635] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.345 04:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78614 00:09:02.345 [2024-11-21 04:55:18.925320] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:02.606 00:09:02.606 real 0m8.918s 00:09:02.606 user 0m15.276s 00:09:02.606 sys 0m1.795s 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.606 ************************************ 00:09:02.606 END TEST raid_state_function_test 00:09:02.606 ************************************ 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.606 04:55:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:02.606 04:55:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.606 04:55:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.606 04:55:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.606 ************************************ 00:09:02.606 START TEST raid_state_function_test_sb 00:09:02.606 ************************************ 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:02.606 04:55:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:02.606 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:02.607 
04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79224 00:09:02.607 Process raid pid: 79224 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79224' 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79224 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79224 ']' 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.607 04:55:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.607 [2024-11-21 04:55:19.315885] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:09:02.607 [2024-11-21 04:55:19.316117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.868 [2024-11-21 04:55:19.475907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.868 [2024-11-21 04:55:19.503610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.868 [2024-11-21 04:55:19.545411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.868 [2024-11-21 04:55:19.545447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.807 [2024-11-21 04:55:20.202531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.807 [2024-11-21 04:55:20.202588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.807 [2024-11-21 04:55:20.202615] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.807 [2024-11-21 04:55:20.202626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.807 [2024-11-21 04:55:20.202633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:03.807 [2024-11-21 04:55:20.202645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.807 "name": "Existed_Raid", 00:09:03.807 "uuid": "f21772f4-847f-4e94-ae36-3575b505f6a4", 00:09:03.807 "strip_size_kb": 0, 00:09:03.807 "state": "configuring", 00:09:03.807 "raid_level": "raid1", 00:09:03.807 "superblock": true, 00:09:03.807 "num_base_bdevs": 3, 00:09:03.807 "num_base_bdevs_discovered": 0, 00:09:03.807 "num_base_bdevs_operational": 3, 00:09:03.807 "base_bdevs_list": [ 00:09:03.807 { 00:09:03.807 "name": "BaseBdev1", 00:09:03.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.807 "is_configured": false, 00:09:03.807 "data_offset": 0, 00:09:03.807 "data_size": 0 00:09:03.807 }, 00:09:03.807 { 00:09:03.807 "name": "BaseBdev2", 00:09:03.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.807 "is_configured": false, 00:09:03.807 "data_offset": 0, 00:09:03.807 "data_size": 0 00:09:03.807 }, 00:09:03.807 { 00:09:03.807 "name": "BaseBdev3", 00:09:03.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.807 "is_configured": false, 00:09:03.807 "data_offset": 0, 00:09:03.807 "data_size": 0 00:09:03.807 } 00:09:03.807 ] 00:09:03.807 }' 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.807 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.067 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.068 [2024-11-21 04:55:20.613722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.068 [2024-11-21 04:55:20.613821] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.068 [2024-11-21 04:55:20.625703] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.068 [2024-11-21 04:55:20.625807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.068 [2024-11-21 04:55:20.625835] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.068 [2024-11-21 04:55:20.625858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.068 [2024-11-21 04:55:20.625877] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.068 [2024-11-21 04:55:20.625897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.068 [2024-11-21 04:55:20.646494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.068 BaseBdev1 
00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.068 [ 00:09:04.068 { 00:09:04.068 "name": "BaseBdev1", 00:09:04.068 "aliases": [ 00:09:04.068 "93d2eec4-99d2-481f-9a2a-42ac7afed9ba" 00:09:04.068 ], 00:09:04.068 "product_name": "Malloc disk", 00:09:04.068 "block_size": 512, 00:09:04.068 "num_blocks": 65536, 00:09:04.068 "uuid": "93d2eec4-99d2-481f-9a2a-42ac7afed9ba", 00:09:04.068 "assigned_rate_limits": { 00:09:04.068 
"rw_ios_per_sec": 0, 00:09:04.068 "rw_mbytes_per_sec": 0, 00:09:04.068 "r_mbytes_per_sec": 0, 00:09:04.068 "w_mbytes_per_sec": 0 00:09:04.068 }, 00:09:04.068 "claimed": true, 00:09:04.068 "claim_type": "exclusive_write", 00:09:04.068 "zoned": false, 00:09:04.068 "supported_io_types": { 00:09:04.068 "read": true, 00:09:04.068 "write": true, 00:09:04.068 "unmap": true, 00:09:04.068 "flush": true, 00:09:04.068 "reset": true, 00:09:04.068 "nvme_admin": false, 00:09:04.068 "nvme_io": false, 00:09:04.068 "nvme_io_md": false, 00:09:04.068 "write_zeroes": true, 00:09:04.068 "zcopy": true, 00:09:04.068 "get_zone_info": false, 00:09:04.068 "zone_management": false, 00:09:04.068 "zone_append": false, 00:09:04.068 "compare": false, 00:09:04.068 "compare_and_write": false, 00:09:04.068 "abort": true, 00:09:04.068 "seek_hole": false, 00:09:04.068 "seek_data": false, 00:09:04.068 "copy": true, 00:09:04.068 "nvme_iov_md": false 00:09:04.068 }, 00:09:04.068 "memory_domains": [ 00:09:04.068 { 00:09:04.068 "dma_device_id": "system", 00:09:04.068 "dma_device_type": 1 00:09:04.068 }, 00:09:04.068 { 00:09:04.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.068 "dma_device_type": 2 00:09:04.068 } 00:09:04.068 ], 00:09:04.068 "driver_specific": {} 00:09:04.068 } 00:09:04.068 ] 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.068 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.068 "name": "Existed_Raid", 00:09:04.068 "uuid": "c99b8522-d7a4-4ba1-a18e-e179bbda3bab", 00:09:04.068 "strip_size_kb": 0, 00:09:04.068 "state": "configuring", 00:09:04.068 "raid_level": "raid1", 00:09:04.068 "superblock": true, 00:09:04.068 "num_base_bdevs": 3, 00:09:04.069 "num_base_bdevs_discovered": 1, 00:09:04.069 "num_base_bdevs_operational": 3, 00:09:04.069 "base_bdevs_list": [ 00:09:04.069 { 00:09:04.069 "name": "BaseBdev1", 00:09:04.069 "uuid": "93d2eec4-99d2-481f-9a2a-42ac7afed9ba", 00:09:04.069 "is_configured": true, 00:09:04.069 "data_offset": 2048, 00:09:04.069 "data_size": 63488 
00:09:04.069 }, 00:09:04.069 { 00:09:04.069 "name": "BaseBdev2", 00:09:04.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.069 "is_configured": false, 00:09:04.069 "data_offset": 0, 00:09:04.069 "data_size": 0 00:09:04.069 }, 00:09:04.069 { 00:09:04.069 "name": "BaseBdev3", 00:09:04.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.069 "is_configured": false, 00:09:04.069 "data_offset": 0, 00:09:04.069 "data_size": 0 00:09:04.069 } 00:09:04.069 ] 00:09:04.069 }' 00:09:04.069 04:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.069 04:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.638 [2024-11-21 04:55:21.105746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.638 [2024-11-21 04:55:21.105802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.638 [2024-11-21 04:55:21.117749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.638 [2024-11-21 04:55:21.119600] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.638 [2024-11-21 04:55:21.119684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.638 [2024-11-21 04:55:21.119716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.638 [2024-11-21 04:55:21.119758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.638 "name": "Existed_Raid", 00:09:04.638 "uuid": "bbb3238d-69da-4785-a1db-3f601c1a7a79", 00:09:04.638 "strip_size_kb": 0, 00:09:04.638 "state": "configuring", 00:09:04.638 "raid_level": "raid1", 00:09:04.638 "superblock": true, 00:09:04.638 "num_base_bdevs": 3, 00:09:04.638 "num_base_bdevs_discovered": 1, 00:09:04.638 "num_base_bdevs_operational": 3, 00:09:04.638 "base_bdevs_list": [ 00:09:04.638 { 00:09:04.638 "name": "BaseBdev1", 00:09:04.638 "uuid": "93d2eec4-99d2-481f-9a2a-42ac7afed9ba", 00:09:04.638 "is_configured": true, 00:09:04.638 "data_offset": 2048, 00:09:04.638 "data_size": 63488 00:09:04.638 }, 00:09:04.638 { 00:09:04.638 "name": "BaseBdev2", 00:09:04.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.638 "is_configured": false, 00:09:04.638 "data_offset": 0, 00:09:04.638 "data_size": 0 00:09:04.638 }, 00:09:04.638 { 00:09:04.638 "name": "BaseBdev3", 00:09:04.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.638 "is_configured": false, 00:09:04.638 "data_offset": 0, 00:09:04.638 "data_size": 0 00:09:04.638 } 00:09:04.638 ] 00:09:04.638 }' 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.638 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.898 BaseBdev2 00:09:04.898 [2024-11-21 04:55:21.563850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.898 [ 00:09:04.898 { 00:09:04.898 "name": "BaseBdev2", 00:09:04.898 "aliases": [ 00:09:04.898 "91990c14-fd20-466d-9673-3ee1e1c74565" 00:09:04.898 ], 00:09:04.898 "product_name": "Malloc disk", 00:09:04.898 "block_size": 512, 00:09:04.898 "num_blocks": 65536, 00:09:04.898 "uuid": "91990c14-fd20-466d-9673-3ee1e1c74565", 00:09:04.898 "assigned_rate_limits": { 00:09:04.898 "rw_ios_per_sec": 0, 00:09:04.898 "rw_mbytes_per_sec": 0, 00:09:04.898 "r_mbytes_per_sec": 0, 00:09:04.898 "w_mbytes_per_sec": 0 00:09:04.898 }, 00:09:04.898 "claimed": true, 00:09:04.898 "claim_type": "exclusive_write", 00:09:04.898 "zoned": false, 00:09:04.898 "supported_io_types": { 00:09:04.898 "read": true, 00:09:04.898 "write": true, 00:09:04.898 "unmap": true, 00:09:04.898 "flush": true, 00:09:04.898 "reset": true, 00:09:04.898 "nvme_admin": false, 00:09:04.898 "nvme_io": false, 00:09:04.898 "nvme_io_md": false, 00:09:04.898 "write_zeroes": true, 00:09:04.898 "zcopy": true, 00:09:04.898 "get_zone_info": false, 00:09:04.898 "zone_management": false, 00:09:04.898 "zone_append": false, 00:09:04.898 "compare": false, 00:09:04.898 "compare_and_write": false, 00:09:04.898 "abort": true, 00:09:04.898 "seek_hole": false, 00:09:04.898 "seek_data": false, 00:09:04.898 "copy": true, 00:09:04.898 "nvme_iov_md": false 00:09:04.898 }, 00:09:04.898 "memory_domains": [ 00:09:04.898 { 00:09:04.898 "dma_device_id": "system", 00:09:04.898 "dma_device_type": 1 00:09:04.898 }, 00:09:04.898 { 00:09:04.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.898 "dma_device_type": 2 00:09:04.898 } 00:09:04.898 ], 00:09:04.898 "driver_specific": {} 00:09:04.898 } 00:09:04.898 ] 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.898 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.158 
04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.158 "name": "Existed_Raid", 00:09:05.158 "uuid": "bbb3238d-69da-4785-a1db-3f601c1a7a79", 00:09:05.158 "strip_size_kb": 0, 00:09:05.158 "state": "configuring", 00:09:05.158 "raid_level": "raid1", 00:09:05.158 "superblock": true, 00:09:05.158 "num_base_bdevs": 3, 00:09:05.158 "num_base_bdevs_discovered": 2, 00:09:05.158 "num_base_bdevs_operational": 3, 00:09:05.158 "base_bdevs_list": [ 00:09:05.158 { 00:09:05.158 "name": "BaseBdev1", 00:09:05.158 "uuid": "93d2eec4-99d2-481f-9a2a-42ac7afed9ba", 00:09:05.158 "is_configured": true, 00:09:05.158 "data_offset": 2048, 00:09:05.158 "data_size": 63488 00:09:05.158 }, 00:09:05.158 { 00:09:05.158 "name": "BaseBdev2", 00:09:05.158 "uuid": "91990c14-fd20-466d-9673-3ee1e1c74565", 00:09:05.158 "is_configured": true, 00:09:05.158 "data_offset": 2048, 00:09:05.158 "data_size": 63488 00:09:05.158 }, 00:09:05.158 { 00:09:05.158 "name": "BaseBdev3", 00:09:05.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.158 "is_configured": false, 00:09:05.158 "data_offset": 0, 00:09:05.158 "data_size": 0 00:09:05.158 } 00:09:05.158 ] 00:09:05.158 }' 00:09:05.158 04:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.158 04:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.417 [2024-11-21 04:55:22.088519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.417 [2024-11-21 04:55:22.088749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:09:05.417 [2024-11-21 04:55:22.088767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:05.417 [2024-11-21 04:55:22.089081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:05.417 [2024-11-21 04:55:22.089275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:05.417 [2024-11-21 04:55:22.089290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:05.417 BaseBdev3 00:09:05.417 [2024-11-21 04:55:22.089446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.417 04:55:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.417 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.417 [ 00:09:05.417 { 00:09:05.417 "name": "BaseBdev3", 00:09:05.417 "aliases": [ 00:09:05.417 "578226ac-e9c8-4eb0-aa78-3f3cc513469f" 00:09:05.417 ], 00:09:05.417 "product_name": "Malloc disk", 00:09:05.417 "block_size": 512, 00:09:05.417 "num_blocks": 65536, 00:09:05.417 "uuid": "578226ac-e9c8-4eb0-aa78-3f3cc513469f", 00:09:05.417 "assigned_rate_limits": { 00:09:05.417 "rw_ios_per_sec": 0, 00:09:05.417 "rw_mbytes_per_sec": 0, 00:09:05.417 "r_mbytes_per_sec": 0, 00:09:05.417 "w_mbytes_per_sec": 0 00:09:05.417 }, 00:09:05.417 "claimed": true, 00:09:05.417 "claim_type": "exclusive_write", 00:09:05.417 "zoned": false, 00:09:05.417 "supported_io_types": { 00:09:05.417 "read": true, 00:09:05.417 "write": true, 00:09:05.417 "unmap": true, 00:09:05.417 "flush": true, 00:09:05.417 "reset": true, 00:09:05.417 "nvme_admin": false, 00:09:05.417 "nvme_io": false, 00:09:05.417 "nvme_io_md": false, 00:09:05.417 "write_zeroes": true, 00:09:05.417 "zcopy": true, 00:09:05.417 "get_zone_info": false, 00:09:05.417 "zone_management": false, 00:09:05.417 "zone_append": false, 00:09:05.417 "compare": false, 00:09:05.417 "compare_and_write": false, 00:09:05.417 "abort": true, 00:09:05.417 "seek_hole": false, 00:09:05.417 "seek_data": false, 00:09:05.417 "copy": true, 00:09:05.417 "nvme_iov_md": false 00:09:05.417 }, 00:09:05.417 "memory_domains": [ 00:09:05.417 { 00:09:05.417 "dma_device_id": "system", 00:09:05.417 "dma_device_type": 1 00:09:05.417 }, 00:09:05.417 { 00:09:05.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.417 "dma_device_type": 2 00:09:05.417 } 00:09:05.417 ], 00:09:05.417 "driver_specific": {} 00:09:05.417 } 00:09:05.417 ] 
00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.418 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.418 
04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.678 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.678 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.678 "name": "Existed_Raid", 00:09:05.678 "uuid": "bbb3238d-69da-4785-a1db-3f601c1a7a79", 00:09:05.678 "strip_size_kb": 0, 00:09:05.678 "state": "online", 00:09:05.678 "raid_level": "raid1", 00:09:05.678 "superblock": true, 00:09:05.678 "num_base_bdevs": 3, 00:09:05.678 "num_base_bdevs_discovered": 3, 00:09:05.678 "num_base_bdevs_operational": 3, 00:09:05.678 "base_bdevs_list": [ 00:09:05.678 { 00:09:05.678 "name": "BaseBdev1", 00:09:05.678 "uuid": "93d2eec4-99d2-481f-9a2a-42ac7afed9ba", 00:09:05.678 "is_configured": true, 00:09:05.678 "data_offset": 2048, 00:09:05.678 "data_size": 63488 00:09:05.678 }, 00:09:05.678 { 00:09:05.678 "name": "BaseBdev2", 00:09:05.678 "uuid": "91990c14-fd20-466d-9673-3ee1e1c74565", 00:09:05.678 "is_configured": true, 00:09:05.678 "data_offset": 2048, 00:09:05.678 "data_size": 63488 00:09:05.678 }, 00:09:05.678 { 00:09:05.678 "name": "BaseBdev3", 00:09:05.678 "uuid": "578226ac-e9c8-4eb0-aa78-3f3cc513469f", 00:09:05.678 "is_configured": true, 00:09:05.678 "data_offset": 2048, 00:09:05.678 "data_size": 63488 00:09:05.678 } 00:09:05.678 ] 00:09:05.678 }' 00:09:05.678 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.678 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.938 [2024-11-21 04:55:22.520196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.938 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.938 "name": "Existed_Raid", 00:09:05.938 "aliases": [ 00:09:05.938 "bbb3238d-69da-4785-a1db-3f601c1a7a79" 00:09:05.938 ], 00:09:05.938 "product_name": "Raid Volume", 00:09:05.938 "block_size": 512, 00:09:05.938 "num_blocks": 63488, 00:09:05.938 "uuid": "bbb3238d-69da-4785-a1db-3f601c1a7a79", 00:09:05.938 "assigned_rate_limits": { 00:09:05.938 "rw_ios_per_sec": 0, 00:09:05.938 "rw_mbytes_per_sec": 0, 00:09:05.938 "r_mbytes_per_sec": 0, 00:09:05.938 "w_mbytes_per_sec": 0 00:09:05.938 }, 00:09:05.938 "claimed": false, 00:09:05.939 "zoned": false, 00:09:05.939 "supported_io_types": { 00:09:05.939 "read": true, 00:09:05.939 "write": true, 00:09:05.939 "unmap": false, 00:09:05.939 "flush": false, 00:09:05.939 "reset": true, 00:09:05.939 "nvme_admin": false, 00:09:05.939 "nvme_io": false, 00:09:05.939 "nvme_io_md": false, 00:09:05.939 "write_zeroes": true, 
00:09:05.939 "zcopy": false, 00:09:05.939 "get_zone_info": false, 00:09:05.939 "zone_management": false, 00:09:05.939 "zone_append": false, 00:09:05.939 "compare": false, 00:09:05.939 "compare_and_write": false, 00:09:05.939 "abort": false, 00:09:05.939 "seek_hole": false, 00:09:05.939 "seek_data": false, 00:09:05.939 "copy": false, 00:09:05.939 "nvme_iov_md": false 00:09:05.939 }, 00:09:05.939 "memory_domains": [ 00:09:05.939 { 00:09:05.939 "dma_device_id": "system", 00:09:05.939 "dma_device_type": 1 00:09:05.939 }, 00:09:05.939 { 00:09:05.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.939 "dma_device_type": 2 00:09:05.939 }, 00:09:05.939 { 00:09:05.939 "dma_device_id": "system", 00:09:05.939 "dma_device_type": 1 00:09:05.939 }, 00:09:05.939 { 00:09:05.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.939 "dma_device_type": 2 00:09:05.939 }, 00:09:05.939 { 00:09:05.939 "dma_device_id": "system", 00:09:05.939 "dma_device_type": 1 00:09:05.939 }, 00:09:05.939 { 00:09:05.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.939 "dma_device_type": 2 00:09:05.939 } 00:09:05.939 ], 00:09:05.939 "driver_specific": { 00:09:05.939 "raid": { 00:09:05.939 "uuid": "bbb3238d-69da-4785-a1db-3f601c1a7a79", 00:09:05.939 "strip_size_kb": 0, 00:09:05.939 "state": "online", 00:09:05.939 "raid_level": "raid1", 00:09:05.939 "superblock": true, 00:09:05.939 "num_base_bdevs": 3, 00:09:05.939 "num_base_bdevs_discovered": 3, 00:09:05.939 "num_base_bdevs_operational": 3, 00:09:05.939 "base_bdevs_list": [ 00:09:05.939 { 00:09:05.939 "name": "BaseBdev1", 00:09:05.939 "uuid": "93d2eec4-99d2-481f-9a2a-42ac7afed9ba", 00:09:05.939 "is_configured": true, 00:09:05.939 "data_offset": 2048, 00:09:05.939 "data_size": 63488 00:09:05.939 }, 00:09:05.939 { 00:09:05.939 "name": "BaseBdev2", 00:09:05.939 "uuid": "91990c14-fd20-466d-9673-3ee1e1c74565", 00:09:05.939 "is_configured": true, 00:09:05.939 "data_offset": 2048, 00:09:05.939 "data_size": 63488 00:09:05.939 }, 00:09:05.939 { 
00:09:05.939 "name": "BaseBdev3", 00:09:05.939 "uuid": "578226ac-e9c8-4eb0-aa78-3f3cc513469f", 00:09:05.939 "is_configured": true, 00:09:05.939 "data_offset": 2048, 00:09:05.939 "data_size": 63488 00:09:05.939 } 00:09:05.939 ] 00:09:05.939 } 00:09:05.939 } 00:09:05.939 }' 00:09:05.939 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.939 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:05.939 BaseBdev2 00:09:05.939 BaseBdev3' 00:09:05.939 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.939 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.939 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.939 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.939 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:05.939 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.939 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.200 04:55:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.200 [2024-11-21 04:55:22.831461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.200 
04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.200 "name": "Existed_Raid", 00:09:06.200 "uuid": "bbb3238d-69da-4785-a1db-3f601c1a7a79", 00:09:06.200 "strip_size_kb": 0, 00:09:06.200 "state": "online", 00:09:06.200 "raid_level": "raid1", 00:09:06.200 "superblock": true, 00:09:06.200 "num_base_bdevs": 3, 00:09:06.200 "num_base_bdevs_discovered": 2, 00:09:06.200 "num_base_bdevs_operational": 2, 00:09:06.200 "base_bdevs_list": [ 00:09:06.200 { 00:09:06.200 "name": null, 00:09:06.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.200 "is_configured": false, 00:09:06.200 "data_offset": 0, 00:09:06.200 "data_size": 63488 00:09:06.200 }, 00:09:06.200 { 00:09:06.200 "name": "BaseBdev2", 00:09:06.200 "uuid": "91990c14-fd20-466d-9673-3ee1e1c74565", 00:09:06.200 "is_configured": true, 00:09:06.200 "data_offset": 2048, 00:09:06.200 "data_size": 63488 00:09:06.200 }, 00:09:06.200 { 00:09:06.200 "name": "BaseBdev3", 00:09:06.200 "uuid": "578226ac-e9c8-4eb0-aa78-3f3cc513469f", 00:09:06.200 "is_configured": true, 00:09:06.200 "data_offset": 2048, 00:09:06.200 "data_size": 63488 00:09:06.200 } 00:09:06.200 ] 00:09:06.200 }' 00:09:06.200 04:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.200 
04:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 [2024-11-21 04:55:23.329772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 [2024-11-21 04:55:23.397040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:06.770 [2024-11-21 04:55:23.397160] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.770 [2024-11-21 04:55:23.408784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.770 [2024-11-21 04:55:23.408842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.770 [2024-11-21 04:55:23.408856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 BaseBdev2 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.770 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 [ 00:09:06.770 { 00:09:06.770 "name": "BaseBdev2", 00:09:06.770 "aliases": [ 00:09:06.770 "3e2cee21-8857-460c-94a5-22d6967d3aab" 00:09:07.031 ], 00:09:07.031 "product_name": "Malloc disk", 00:09:07.031 "block_size": 512, 00:09:07.031 "num_blocks": 65536, 00:09:07.031 "uuid": "3e2cee21-8857-460c-94a5-22d6967d3aab", 00:09:07.031 "assigned_rate_limits": { 00:09:07.031 "rw_ios_per_sec": 0, 00:09:07.031 "rw_mbytes_per_sec": 0, 00:09:07.031 "r_mbytes_per_sec": 0, 00:09:07.031 "w_mbytes_per_sec": 0 00:09:07.031 }, 00:09:07.031 "claimed": false, 00:09:07.031 "zoned": false, 00:09:07.031 "supported_io_types": { 00:09:07.031 "read": true, 00:09:07.031 "write": true, 00:09:07.031 "unmap": true, 00:09:07.031 "flush": true, 00:09:07.031 "reset": true, 00:09:07.031 "nvme_admin": false, 00:09:07.031 "nvme_io": false, 00:09:07.031 
"nvme_io_md": false, 00:09:07.031 "write_zeroes": true, 00:09:07.031 "zcopy": true, 00:09:07.031 "get_zone_info": false, 00:09:07.031 "zone_management": false, 00:09:07.031 "zone_append": false, 00:09:07.031 "compare": false, 00:09:07.031 "compare_and_write": false, 00:09:07.031 "abort": true, 00:09:07.031 "seek_hole": false, 00:09:07.031 "seek_data": false, 00:09:07.031 "copy": true, 00:09:07.031 "nvme_iov_md": false 00:09:07.031 }, 00:09:07.031 "memory_domains": [ 00:09:07.031 { 00:09:07.031 "dma_device_id": "system", 00:09:07.031 "dma_device_type": 1 00:09:07.031 }, 00:09:07.031 { 00:09:07.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.031 "dma_device_type": 2 00:09:07.031 } 00:09:07.031 ], 00:09:07.031 "driver_specific": {} 00:09:07.031 } 00:09:07.031 ] 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.031 BaseBdev3 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.031 [ 00:09:07.031 { 00:09:07.031 "name": "BaseBdev3", 00:09:07.031 "aliases": [ 00:09:07.031 "cc9a5046-a12e-40ce-a589-514ecffbf2d9" 00:09:07.031 ], 00:09:07.031 "product_name": "Malloc disk", 00:09:07.031 "block_size": 512, 00:09:07.031 "num_blocks": 65536, 00:09:07.031 "uuid": "cc9a5046-a12e-40ce-a589-514ecffbf2d9", 00:09:07.031 "assigned_rate_limits": { 00:09:07.031 "rw_ios_per_sec": 0, 00:09:07.031 "rw_mbytes_per_sec": 0, 00:09:07.031 "r_mbytes_per_sec": 0, 00:09:07.031 "w_mbytes_per_sec": 0 00:09:07.031 }, 00:09:07.031 "claimed": false, 00:09:07.031 "zoned": false, 00:09:07.031 "supported_io_types": { 00:09:07.031 "read": true, 00:09:07.031 "write": true, 00:09:07.031 "unmap": true, 00:09:07.031 "flush": true, 00:09:07.031 "reset": true, 00:09:07.031 "nvme_admin": false, 
00:09:07.031 "nvme_io": false, 00:09:07.031 "nvme_io_md": false, 00:09:07.031 "write_zeroes": true, 00:09:07.031 "zcopy": true, 00:09:07.031 "get_zone_info": false, 00:09:07.031 "zone_management": false, 00:09:07.031 "zone_append": false, 00:09:07.031 "compare": false, 00:09:07.031 "compare_and_write": false, 00:09:07.031 "abort": true, 00:09:07.031 "seek_hole": false, 00:09:07.031 "seek_data": false, 00:09:07.031 "copy": true, 00:09:07.031 "nvme_iov_md": false 00:09:07.031 }, 00:09:07.031 "memory_domains": [ 00:09:07.031 { 00:09:07.031 "dma_device_id": "system", 00:09:07.031 "dma_device_type": 1 00:09:07.031 }, 00:09:07.031 { 00:09:07.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.031 "dma_device_type": 2 00:09:07.031 } 00:09:07.031 ], 00:09:07.031 "driver_specific": {} 00:09:07.031 } 00:09:07.031 ] 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.031 [2024-11-21 04:55:23.569236] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.031 [2024-11-21 04:55:23.569284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.031 [2024-11-21 04:55:23.569318] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.031 [2024-11-21 04:55:23.571049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.031 
04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.031 "name": "Existed_Raid", 00:09:07.031 "uuid": "0e551fad-6a1f-49ac-bac9-3b3d87466650", 00:09:07.031 "strip_size_kb": 0, 00:09:07.031 "state": "configuring", 00:09:07.031 "raid_level": "raid1", 00:09:07.031 "superblock": true, 00:09:07.031 "num_base_bdevs": 3, 00:09:07.031 "num_base_bdevs_discovered": 2, 00:09:07.031 "num_base_bdevs_operational": 3, 00:09:07.031 "base_bdevs_list": [ 00:09:07.031 { 00:09:07.031 "name": "BaseBdev1", 00:09:07.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.031 "is_configured": false, 00:09:07.031 "data_offset": 0, 00:09:07.031 "data_size": 0 00:09:07.031 }, 00:09:07.031 { 00:09:07.031 "name": "BaseBdev2", 00:09:07.031 "uuid": "3e2cee21-8857-460c-94a5-22d6967d3aab", 00:09:07.031 "is_configured": true, 00:09:07.031 "data_offset": 2048, 00:09:07.031 "data_size": 63488 00:09:07.031 }, 00:09:07.031 { 00:09:07.031 "name": "BaseBdev3", 00:09:07.031 "uuid": "cc9a5046-a12e-40ce-a589-514ecffbf2d9", 00:09:07.031 "is_configured": true, 00:09:07.031 "data_offset": 2048, 00:09:07.031 "data_size": 63488 00:09:07.031 } 00:09:07.031 ] 00:09:07.031 }' 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.031 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.291 [2024-11-21 04:55:23.968575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:07.291 04:55:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.291 04:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.292 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.292 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 04:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.551 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.551 "name": 
"Existed_Raid", 00:09:07.551 "uuid": "0e551fad-6a1f-49ac-bac9-3b3d87466650", 00:09:07.551 "strip_size_kb": 0, 00:09:07.551 "state": "configuring", 00:09:07.551 "raid_level": "raid1", 00:09:07.551 "superblock": true, 00:09:07.551 "num_base_bdevs": 3, 00:09:07.551 "num_base_bdevs_discovered": 1, 00:09:07.551 "num_base_bdevs_operational": 3, 00:09:07.551 "base_bdevs_list": [ 00:09:07.551 { 00:09:07.551 "name": "BaseBdev1", 00:09:07.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.551 "is_configured": false, 00:09:07.551 "data_offset": 0, 00:09:07.551 "data_size": 0 00:09:07.551 }, 00:09:07.551 { 00:09:07.551 "name": null, 00:09:07.551 "uuid": "3e2cee21-8857-460c-94a5-22d6967d3aab", 00:09:07.551 "is_configured": false, 00:09:07.551 "data_offset": 0, 00:09:07.551 "data_size": 63488 00:09:07.551 }, 00:09:07.551 { 00:09:07.551 "name": "BaseBdev3", 00:09:07.551 "uuid": "cc9a5046-a12e-40ce-a589-514ecffbf2d9", 00:09:07.551 "is_configured": true, 00:09:07.551 "data_offset": 2048, 00:09:07.551 "data_size": 63488 00:09:07.551 } 00:09:07.551 ] 00:09:07.551 }' 00:09:07.551 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.551 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:07.811 
04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.811 [2024-11-21 04:55:24.482620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.811 BaseBdev1 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.811 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.812 [ 00:09:07.812 { 00:09:07.812 "name": "BaseBdev1", 00:09:07.812 "aliases": [ 00:09:07.812 "12f77f2f-7fe7-4466-b060-8d5b07810c96" 00:09:07.812 ], 00:09:07.812 "product_name": "Malloc disk", 00:09:07.812 "block_size": 512, 00:09:07.812 "num_blocks": 65536, 00:09:07.812 "uuid": "12f77f2f-7fe7-4466-b060-8d5b07810c96", 00:09:07.812 "assigned_rate_limits": { 00:09:07.812 "rw_ios_per_sec": 0, 00:09:07.812 "rw_mbytes_per_sec": 0, 00:09:07.812 "r_mbytes_per_sec": 0, 00:09:07.812 "w_mbytes_per_sec": 0 00:09:07.812 }, 00:09:07.812 "claimed": true, 00:09:07.812 "claim_type": "exclusive_write", 00:09:07.812 "zoned": false, 00:09:07.812 "supported_io_types": { 00:09:07.812 "read": true, 00:09:07.812 "write": true, 00:09:07.812 "unmap": true, 00:09:07.812 "flush": true, 00:09:07.812 "reset": true, 00:09:07.812 "nvme_admin": false, 00:09:07.812 "nvme_io": false, 00:09:07.812 "nvme_io_md": false, 00:09:07.812 "write_zeroes": true, 00:09:07.812 "zcopy": true, 00:09:07.812 "get_zone_info": false, 00:09:07.812 "zone_management": false, 00:09:07.812 "zone_append": false, 00:09:07.812 "compare": false, 00:09:07.812 "compare_and_write": false, 00:09:07.812 "abort": true, 00:09:07.812 "seek_hole": false, 00:09:07.812 "seek_data": false, 00:09:07.812 "copy": true, 00:09:07.812 "nvme_iov_md": false 00:09:07.812 }, 00:09:07.812 "memory_domains": [ 00:09:07.812 { 00:09:07.812 "dma_device_id": "system", 00:09:07.812 "dma_device_type": 1 00:09:07.812 }, 00:09:07.812 { 00:09:07.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.812 "dma_device_type": 2 00:09:07.812 } 00:09:07.812 ], 00:09:07.812 "driver_specific": {} 00:09:07.812 } 00:09:07.812 ] 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.812 
04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.812 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.072 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.072 "name": "Existed_Raid", 00:09:08.072 "uuid": "0e551fad-6a1f-49ac-bac9-3b3d87466650", 00:09:08.072 "strip_size_kb": 0, 
00:09:08.072 "state": "configuring", 00:09:08.072 "raid_level": "raid1", 00:09:08.072 "superblock": true, 00:09:08.072 "num_base_bdevs": 3, 00:09:08.072 "num_base_bdevs_discovered": 2, 00:09:08.072 "num_base_bdevs_operational": 3, 00:09:08.072 "base_bdevs_list": [ 00:09:08.072 { 00:09:08.072 "name": "BaseBdev1", 00:09:08.072 "uuid": "12f77f2f-7fe7-4466-b060-8d5b07810c96", 00:09:08.072 "is_configured": true, 00:09:08.072 "data_offset": 2048, 00:09:08.072 "data_size": 63488 00:09:08.072 }, 00:09:08.072 { 00:09:08.072 "name": null, 00:09:08.072 "uuid": "3e2cee21-8857-460c-94a5-22d6967d3aab", 00:09:08.072 "is_configured": false, 00:09:08.072 "data_offset": 0, 00:09:08.072 "data_size": 63488 00:09:08.072 }, 00:09:08.072 { 00:09:08.072 "name": "BaseBdev3", 00:09:08.072 "uuid": "cc9a5046-a12e-40ce-a589-514ecffbf2d9", 00:09:08.072 "is_configured": true, 00:09:08.072 "data_offset": 2048, 00:09:08.072 "data_size": 63488 00:09:08.072 } 00:09:08.072 ] 00:09:08.072 }' 00:09:08.072 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.072 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.331 [2024-11-21 04:55:24.941929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.331 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.332 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.332 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.332 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.332 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.332 04:55:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.332 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.332 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.332 "name": "Existed_Raid", 00:09:08.332 "uuid": "0e551fad-6a1f-49ac-bac9-3b3d87466650", 00:09:08.332 "strip_size_kb": 0, 00:09:08.332 "state": "configuring", 00:09:08.332 "raid_level": "raid1", 00:09:08.332 "superblock": true, 00:09:08.332 "num_base_bdevs": 3, 00:09:08.332 "num_base_bdevs_discovered": 1, 00:09:08.332 "num_base_bdevs_operational": 3, 00:09:08.332 "base_bdevs_list": [ 00:09:08.332 { 00:09:08.332 "name": "BaseBdev1", 00:09:08.332 "uuid": "12f77f2f-7fe7-4466-b060-8d5b07810c96", 00:09:08.332 "is_configured": true, 00:09:08.332 "data_offset": 2048, 00:09:08.332 "data_size": 63488 00:09:08.332 }, 00:09:08.332 { 00:09:08.332 "name": null, 00:09:08.332 "uuid": "3e2cee21-8857-460c-94a5-22d6967d3aab", 00:09:08.332 "is_configured": false, 00:09:08.332 "data_offset": 0, 00:09:08.332 "data_size": 63488 00:09:08.332 }, 00:09:08.332 { 00:09:08.332 "name": null, 00:09:08.332 "uuid": "cc9a5046-a12e-40ce-a589-514ecffbf2d9", 00:09:08.332 "is_configured": false, 00:09:08.332 "data_offset": 0, 00:09:08.332 "data_size": 63488 00:09:08.332 } 00:09:08.332 ] 00:09:08.332 }' 00:09:08.332 04:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.332 04:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.901 04:55:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.901 [2024-11-21 04:55:25.449058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.901 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.901 "name": "Existed_Raid", 00:09:08.901 "uuid": "0e551fad-6a1f-49ac-bac9-3b3d87466650", 00:09:08.901 "strip_size_kb": 0, 00:09:08.901 "state": "configuring", 00:09:08.901 "raid_level": "raid1", 00:09:08.901 "superblock": true, 00:09:08.901 "num_base_bdevs": 3, 00:09:08.901 "num_base_bdevs_discovered": 2, 00:09:08.901 "num_base_bdevs_operational": 3, 00:09:08.901 "base_bdevs_list": [ 00:09:08.901 { 00:09:08.901 "name": "BaseBdev1", 00:09:08.901 "uuid": "12f77f2f-7fe7-4466-b060-8d5b07810c96", 00:09:08.901 "is_configured": true, 00:09:08.901 "data_offset": 2048, 00:09:08.901 "data_size": 63488 00:09:08.901 }, 00:09:08.901 { 00:09:08.901 "name": null, 00:09:08.901 "uuid": "3e2cee21-8857-460c-94a5-22d6967d3aab", 00:09:08.901 "is_configured": false, 00:09:08.901 "data_offset": 0, 00:09:08.901 "data_size": 63488 00:09:08.901 }, 00:09:08.901 { 00:09:08.902 "name": "BaseBdev3", 00:09:08.902 "uuid": "cc9a5046-a12e-40ce-a589-514ecffbf2d9", 00:09:08.902 "is_configured": true, 00:09:08.902 "data_offset": 2048, 00:09:08.902 "data_size": 63488 00:09:08.902 } 00:09:08.902 ] 00:09:08.902 }' 00:09:08.902 04:55:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.902 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.472 [2024-11-21 04:55:25.964242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.472 04:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.472 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.472 "name": "Existed_Raid", 00:09:09.472 "uuid": "0e551fad-6a1f-49ac-bac9-3b3d87466650", 00:09:09.472 "strip_size_kb": 0, 00:09:09.472 "state": "configuring", 00:09:09.472 "raid_level": "raid1", 00:09:09.472 "superblock": true, 00:09:09.472 "num_base_bdevs": 3, 00:09:09.472 "num_base_bdevs_discovered": 1, 00:09:09.472 "num_base_bdevs_operational": 3, 00:09:09.472 "base_bdevs_list": [ 00:09:09.472 { 00:09:09.472 "name": null, 00:09:09.472 "uuid": "12f77f2f-7fe7-4466-b060-8d5b07810c96", 00:09:09.472 "is_configured": false, 00:09:09.472 "data_offset": 0, 00:09:09.472 "data_size": 63488 00:09:09.472 }, 00:09:09.472 { 00:09:09.472 "name": null, 00:09:09.472 "uuid": 
"3e2cee21-8857-460c-94a5-22d6967d3aab", 00:09:09.472 "is_configured": false, 00:09:09.472 "data_offset": 0, 00:09:09.472 "data_size": 63488 00:09:09.472 }, 00:09:09.472 { 00:09:09.472 "name": "BaseBdev3", 00:09:09.472 "uuid": "cc9a5046-a12e-40ce-a589-514ecffbf2d9", 00:09:09.472 "is_configured": true, 00:09:09.472 "data_offset": 2048, 00:09:09.472 "data_size": 63488 00:09:09.472 } 00:09:09.472 ] 00:09:09.472 }' 00:09:09.472 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.472 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.732 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.732 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:09.732 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.732 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.732 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.992 [2024-11-21 04:55:26.485427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.992 "name": "Existed_Raid", 00:09:09.992 "uuid": "0e551fad-6a1f-49ac-bac9-3b3d87466650", 00:09:09.992 "strip_size_kb": 0, 00:09:09.992 "state": "configuring", 00:09:09.992 
"raid_level": "raid1", 00:09:09.992 "superblock": true, 00:09:09.992 "num_base_bdevs": 3, 00:09:09.992 "num_base_bdevs_discovered": 2, 00:09:09.992 "num_base_bdevs_operational": 3, 00:09:09.992 "base_bdevs_list": [ 00:09:09.992 { 00:09:09.992 "name": null, 00:09:09.992 "uuid": "12f77f2f-7fe7-4466-b060-8d5b07810c96", 00:09:09.992 "is_configured": false, 00:09:09.992 "data_offset": 0, 00:09:09.992 "data_size": 63488 00:09:09.992 }, 00:09:09.992 { 00:09:09.992 "name": "BaseBdev2", 00:09:09.992 "uuid": "3e2cee21-8857-460c-94a5-22d6967d3aab", 00:09:09.992 "is_configured": true, 00:09:09.992 "data_offset": 2048, 00:09:09.992 "data_size": 63488 00:09:09.992 }, 00:09:09.992 { 00:09:09.992 "name": "BaseBdev3", 00:09:09.992 "uuid": "cc9a5046-a12e-40ce-a589-514ecffbf2d9", 00:09:09.992 "is_configured": true, 00:09:09.992 "data_offset": 2048, 00:09:09.992 "data_size": 63488 00:09:09.992 } 00:09:09.992 ] 00:09:09.992 }' 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.992 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.252 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:10.253 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.253 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.253 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.253 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.253 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:10.253 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.253 04:55:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.253 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.513 04:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:10.513 04:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 12f77f2f-7fe7-4466-b060-8d5b07810c96 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.513 [2024-11-21 04:55:27.043286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:10.513 NewBaseBdev 00:09:10.513 [2024-11-21 04:55:27.043555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:10.513 [2024-11-21 04:55:27.043573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:10.513 [2024-11-21 04:55:27.043817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:10.513 [2024-11-21 04:55:27.043933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:10.513 [2024-11-21 04:55:27.043946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:10.513 [2024-11-21 04:55:27.044047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:10.513 
04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.513 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.513 [ 00:09:10.513 { 00:09:10.513 "name": "NewBaseBdev", 00:09:10.513 "aliases": [ 00:09:10.513 "12f77f2f-7fe7-4466-b060-8d5b07810c96" 00:09:10.513 ], 00:09:10.513 "product_name": "Malloc disk", 00:09:10.513 "block_size": 512, 00:09:10.513 "num_blocks": 65536, 00:09:10.513 "uuid": "12f77f2f-7fe7-4466-b060-8d5b07810c96", 00:09:10.513 "assigned_rate_limits": { 00:09:10.513 "rw_ios_per_sec": 0, 00:09:10.513 "rw_mbytes_per_sec": 0, 00:09:10.513 "r_mbytes_per_sec": 0, 00:09:10.513 "w_mbytes_per_sec": 0 00:09:10.513 }, 00:09:10.513 "claimed": true, 00:09:10.513 "claim_type": "exclusive_write", 00:09:10.513 
"zoned": false, 00:09:10.513 "supported_io_types": { 00:09:10.513 "read": true, 00:09:10.513 "write": true, 00:09:10.513 "unmap": true, 00:09:10.513 "flush": true, 00:09:10.513 "reset": true, 00:09:10.513 "nvme_admin": false, 00:09:10.513 "nvme_io": false, 00:09:10.513 "nvme_io_md": false, 00:09:10.513 "write_zeroes": true, 00:09:10.513 "zcopy": true, 00:09:10.513 "get_zone_info": false, 00:09:10.513 "zone_management": false, 00:09:10.513 "zone_append": false, 00:09:10.513 "compare": false, 00:09:10.513 "compare_and_write": false, 00:09:10.513 "abort": true, 00:09:10.513 "seek_hole": false, 00:09:10.513 "seek_data": false, 00:09:10.513 "copy": true, 00:09:10.513 "nvme_iov_md": false 00:09:10.513 }, 00:09:10.513 "memory_domains": [ 00:09:10.513 { 00:09:10.513 "dma_device_id": "system", 00:09:10.514 "dma_device_type": 1 00:09:10.514 }, 00:09:10.514 { 00:09:10.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.514 "dma_device_type": 2 00:09:10.514 } 00:09:10.514 ], 00:09:10.514 "driver_specific": {} 00:09:10.514 } 00:09:10.514 ] 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.514 "name": "Existed_Raid", 00:09:10.514 "uuid": "0e551fad-6a1f-49ac-bac9-3b3d87466650", 00:09:10.514 "strip_size_kb": 0, 00:09:10.514 "state": "online", 00:09:10.514 "raid_level": "raid1", 00:09:10.514 "superblock": true, 00:09:10.514 "num_base_bdevs": 3, 00:09:10.514 "num_base_bdevs_discovered": 3, 00:09:10.514 "num_base_bdevs_operational": 3, 00:09:10.514 "base_bdevs_list": [ 00:09:10.514 { 00:09:10.514 "name": "NewBaseBdev", 00:09:10.514 "uuid": "12f77f2f-7fe7-4466-b060-8d5b07810c96", 00:09:10.514 "is_configured": true, 00:09:10.514 "data_offset": 2048, 00:09:10.514 "data_size": 63488 00:09:10.514 }, 00:09:10.514 { 00:09:10.514 "name": "BaseBdev2", 00:09:10.514 "uuid": "3e2cee21-8857-460c-94a5-22d6967d3aab", 00:09:10.514 "is_configured": true, 00:09:10.514 "data_offset": 2048, 00:09:10.514 "data_size": 63488 00:09:10.514 }, 00:09:10.514 
{ 00:09:10.514 "name": "BaseBdev3", 00:09:10.514 "uuid": "cc9a5046-a12e-40ce-a589-514ecffbf2d9", 00:09:10.514 "is_configured": true, 00:09:10.514 "data_offset": 2048, 00:09:10.514 "data_size": 63488 00:09:10.514 } 00:09:10.514 ] 00:09:10.514 }' 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.514 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.084 [2024-11-21 04:55:27.558773] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.084 "name": "Existed_Raid", 00:09:11.084 
"aliases": [ 00:09:11.084 "0e551fad-6a1f-49ac-bac9-3b3d87466650" 00:09:11.084 ], 00:09:11.084 "product_name": "Raid Volume", 00:09:11.084 "block_size": 512, 00:09:11.084 "num_blocks": 63488, 00:09:11.084 "uuid": "0e551fad-6a1f-49ac-bac9-3b3d87466650", 00:09:11.084 "assigned_rate_limits": { 00:09:11.084 "rw_ios_per_sec": 0, 00:09:11.084 "rw_mbytes_per_sec": 0, 00:09:11.084 "r_mbytes_per_sec": 0, 00:09:11.084 "w_mbytes_per_sec": 0 00:09:11.084 }, 00:09:11.084 "claimed": false, 00:09:11.084 "zoned": false, 00:09:11.084 "supported_io_types": { 00:09:11.084 "read": true, 00:09:11.084 "write": true, 00:09:11.084 "unmap": false, 00:09:11.084 "flush": false, 00:09:11.084 "reset": true, 00:09:11.084 "nvme_admin": false, 00:09:11.084 "nvme_io": false, 00:09:11.084 "nvme_io_md": false, 00:09:11.084 "write_zeroes": true, 00:09:11.084 "zcopy": false, 00:09:11.084 "get_zone_info": false, 00:09:11.084 "zone_management": false, 00:09:11.084 "zone_append": false, 00:09:11.084 "compare": false, 00:09:11.084 "compare_and_write": false, 00:09:11.084 "abort": false, 00:09:11.084 "seek_hole": false, 00:09:11.084 "seek_data": false, 00:09:11.084 "copy": false, 00:09:11.084 "nvme_iov_md": false 00:09:11.084 }, 00:09:11.084 "memory_domains": [ 00:09:11.084 { 00:09:11.084 "dma_device_id": "system", 00:09:11.084 "dma_device_type": 1 00:09:11.084 }, 00:09:11.084 { 00:09:11.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.084 "dma_device_type": 2 00:09:11.084 }, 00:09:11.084 { 00:09:11.084 "dma_device_id": "system", 00:09:11.084 "dma_device_type": 1 00:09:11.084 }, 00:09:11.084 { 00:09:11.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.084 "dma_device_type": 2 00:09:11.084 }, 00:09:11.084 { 00:09:11.084 "dma_device_id": "system", 00:09:11.084 "dma_device_type": 1 00:09:11.084 }, 00:09:11.084 { 00:09:11.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.084 "dma_device_type": 2 00:09:11.084 } 00:09:11.084 ], 00:09:11.084 "driver_specific": { 00:09:11.084 "raid": { 00:09:11.084 
"uuid": "0e551fad-6a1f-49ac-bac9-3b3d87466650", 00:09:11.084 "strip_size_kb": 0, 00:09:11.084 "state": "online", 00:09:11.084 "raid_level": "raid1", 00:09:11.084 "superblock": true, 00:09:11.084 "num_base_bdevs": 3, 00:09:11.084 "num_base_bdevs_discovered": 3, 00:09:11.084 "num_base_bdevs_operational": 3, 00:09:11.084 "base_bdevs_list": [ 00:09:11.084 { 00:09:11.084 "name": "NewBaseBdev", 00:09:11.084 "uuid": "12f77f2f-7fe7-4466-b060-8d5b07810c96", 00:09:11.084 "is_configured": true, 00:09:11.084 "data_offset": 2048, 00:09:11.084 "data_size": 63488 00:09:11.084 }, 00:09:11.084 { 00:09:11.084 "name": "BaseBdev2", 00:09:11.084 "uuid": "3e2cee21-8857-460c-94a5-22d6967d3aab", 00:09:11.084 "is_configured": true, 00:09:11.084 "data_offset": 2048, 00:09:11.084 "data_size": 63488 00:09:11.084 }, 00:09:11.084 { 00:09:11.084 "name": "BaseBdev3", 00:09:11.084 "uuid": "cc9a5046-a12e-40ce-a589-514ecffbf2d9", 00:09:11.084 "is_configured": true, 00:09:11.084 "data_offset": 2048, 00:09:11.084 "data_size": 63488 00:09:11.084 } 00:09:11.084 ] 00:09:11.084 } 00:09:11.084 } 00:09:11.084 }' 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:11.084 BaseBdev2 00:09:11.084 BaseBdev3' 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.084 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.085 
04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.085 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.346 [2024-11-21 04:55:27.825979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.346 [2024-11-21 04:55:27.826046] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.346 [2024-11-21 04:55:27.826166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.346 [2024-11-21 04:55:27.826449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.346 [2024-11-21 04:55:27.826503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79224 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79224 ']' 00:09:11.346 04:55:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 79224 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79224 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79224' 00:09:11.346 killing process with pid 79224 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 79224 00:09:11.346 [2024-11-21 04:55:27.864859] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.346 04:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 79224 00:09:11.346 [2024-11-21 04:55:27.895882] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.606 04:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:11.606 00:09:11.606 real 0m8.900s 00:09:11.606 user 0m15.199s 00:09:11.606 sys 0m1.840s 00:09:11.606 04:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.606 04:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.606 ************************************ 00:09:11.606 END TEST raid_state_function_test_sb 00:09:11.606 ************************************ 00:09:11.606 04:55:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:11.606 04:55:28 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:11.606 04:55:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.606 04:55:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.606 ************************************ 00:09:11.606 START TEST raid_superblock_test 00:09:11.606 ************************************ 00:09:11.606 04:55:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:11.606 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:11.607 04:55:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79822 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79822 00:09:11.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 79822 ']' 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.607 04:55:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.607 [2024-11-21 04:55:28.278959] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:09:11.607 [2024-11-21 04:55:28.279186] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79822 ] 00:09:11.866 [2024-11-21 04:55:28.450539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.867 [2024-11-21 04:55:28.475657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.867 [2024-11-21 04:55:28.518310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.867 [2024-11-21 04:55:28.518397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:12.436 
04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.436 malloc1 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.436 [2024-11-21 04:55:29.132727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:12.436 [2024-11-21 04:55:29.132804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.436 [2024-11-21 04:55:29.132824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:12.436 [2024-11-21 04:55:29.132837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.436 [2024-11-21 04:55:29.134886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.436 [2024-11-21 04:55:29.134936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:12.436 pt1 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.436 malloc2 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.436 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.436 [2024-11-21 04:55:29.161183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.436 [2024-11-21 04:55:29.161286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.436 [2024-11-21 04:55:29.161317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:12.437 [2024-11-21 04:55:29.161346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.437 [2024-11-21 04:55:29.163380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.437 [2024-11-21 04:55:29.163451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.437 
pt2 00:09:12.437 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.437 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:12.437 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:12.437 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:12.437 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:12.437 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:12.437 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:12.437 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.697 malloc3 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.697 [2024-11-21 04:55:29.193563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:12.697 [2024-11-21 04:55:29.193654] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.697 [2024-11-21 04:55:29.193688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:12.697 [2024-11-21 04:55:29.193717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.697 [2024-11-21 04:55:29.195744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.697 [2024-11-21 04:55:29.195816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:12.697 pt3 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.697 [2024-11-21 04:55:29.205557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:12.697 [2024-11-21 04:55:29.207391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.697 [2024-11-21 04:55:29.207448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:12.697 [2024-11-21 04:55:29.207584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:12.697 [2024-11-21 04:55:29.207596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:12.697 [2024-11-21 04:55:29.207845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:12.697 
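The xtrace above (bdev_raid.sh@416-426) shows the test looping `i = 1..3`, building three parallel arrays and creating a malloc bdev plus a passthru wrapper per iteration. A runnable sketch of that array-building loop, reconstructed from the trace (the `rpc_cmd` calls are left as comments because they need a live SPDK target; the exact source may differ):

```shell
#!/usr/bin/env bash
# Reconstruction of the base-bdev setup loop seen in the xtrace above.
num_base_bdevs=3
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()
for ((i = 1; i <= num_base_bdevs; i++)); do
  bdev_malloc="malloc$i"
  bdev_pt="pt$i"
  # UUIDs in the trace are 00000000-0000-0000-0000-00000000000N
  bdev_pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
  base_bdevs_malloc+=("$bdev_malloc")
  base_bdevs_pt+=("$bdev_pt")
  base_bdevs_pt_uuid+=("$bdev_pt_uuid")
  # In the real test each pass then issues (requires a running SPDK app):
  #   rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
  #   rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done
echo "${base_bdevs_pt[*]}"   # pt1 pt2 pt3
```

After the loop, the trace shows the passthru names handed to `bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s`, which is why the raid volume's base_bdevs_list carries exactly these names and UUIDs.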
[2024-11-21 04:55:29.207978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:12.697 [2024-11-21 04:55:29.207990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:12.697 [2024-11-21 04:55:29.208128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.697 "name": "raid_bdev1", 00:09:12.697 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:12.697 "strip_size_kb": 0, 00:09:12.697 "state": "online", 00:09:12.697 "raid_level": "raid1", 00:09:12.697 "superblock": true, 00:09:12.697 "num_base_bdevs": 3, 00:09:12.697 "num_base_bdevs_discovered": 3, 00:09:12.697 "num_base_bdevs_operational": 3, 00:09:12.697 "base_bdevs_list": [ 00:09:12.697 { 00:09:12.697 "name": "pt1", 00:09:12.697 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.697 "is_configured": true, 00:09:12.697 "data_offset": 2048, 00:09:12.697 "data_size": 63488 00:09:12.697 }, 00:09:12.697 { 00:09:12.697 "name": "pt2", 00:09:12.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.697 "is_configured": true, 00:09:12.697 "data_offset": 2048, 00:09:12.697 "data_size": 63488 00:09:12.697 }, 00:09:12.697 { 00:09:12.697 "name": "pt3", 00:09:12.697 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.697 "is_configured": true, 00:09:12.697 "data_offset": 2048, 00:09:12.697 "data_size": 63488 00:09:12.697 } 00:09:12.697 ] 00:09:12.697 }' 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.697 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.958 04:55:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.958 [2024-11-21 04:55:29.617180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.958 "name": "raid_bdev1", 00:09:12.958 "aliases": [ 00:09:12.958 "8a65731d-a19d-49be-919a-e9f5d0ef35bd" 00:09:12.958 ], 00:09:12.958 "product_name": "Raid Volume", 00:09:12.958 "block_size": 512, 00:09:12.958 "num_blocks": 63488, 00:09:12.958 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:12.958 "assigned_rate_limits": { 00:09:12.958 "rw_ios_per_sec": 0, 00:09:12.958 "rw_mbytes_per_sec": 0, 00:09:12.958 "r_mbytes_per_sec": 0, 00:09:12.958 "w_mbytes_per_sec": 0 00:09:12.958 }, 00:09:12.958 "claimed": false, 00:09:12.958 "zoned": false, 00:09:12.958 "supported_io_types": { 00:09:12.958 "read": true, 00:09:12.958 "write": true, 00:09:12.958 "unmap": false, 00:09:12.958 "flush": false, 00:09:12.958 "reset": true, 00:09:12.958 "nvme_admin": false, 00:09:12.958 "nvme_io": false, 00:09:12.958 "nvme_io_md": false, 00:09:12.958 "write_zeroes": true, 00:09:12.958 "zcopy": false, 00:09:12.958 "get_zone_info": false, 00:09:12.958 "zone_management": false, 00:09:12.958 "zone_append": false, 00:09:12.958 "compare": false, 00:09:12.958 
"compare_and_write": false, 00:09:12.958 "abort": false, 00:09:12.958 "seek_hole": false, 00:09:12.958 "seek_data": false, 00:09:12.958 "copy": false, 00:09:12.958 "nvme_iov_md": false 00:09:12.958 }, 00:09:12.958 "memory_domains": [ 00:09:12.958 { 00:09:12.958 "dma_device_id": "system", 00:09:12.958 "dma_device_type": 1 00:09:12.958 }, 00:09:12.958 { 00:09:12.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.958 "dma_device_type": 2 00:09:12.958 }, 00:09:12.958 { 00:09:12.958 "dma_device_id": "system", 00:09:12.958 "dma_device_type": 1 00:09:12.958 }, 00:09:12.958 { 00:09:12.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.958 "dma_device_type": 2 00:09:12.958 }, 00:09:12.958 { 00:09:12.958 "dma_device_id": "system", 00:09:12.958 "dma_device_type": 1 00:09:12.958 }, 00:09:12.958 { 00:09:12.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.958 "dma_device_type": 2 00:09:12.958 } 00:09:12.958 ], 00:09:12.958 "driver_specific": { 00:09:12.958 "raid": { 00:09:12.958 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:12.958 "strip_size_kb": 0, 00:09:12.958 "state": "online", 00:09:12.958 "raid_level": "raid1", 00:09:12.958 "superblock": true, 00:09:12.958 "num_base_bdevs": 3, 00:09:12.958 "num_base_bdevs_discovered": 3, 00:09:12.958 "num_base_bdevs_operational": 3, 00:09:12.958 "base_bdevs_list": [ 00:09:12.958 { 00:09:12.958 "name": "pt1", 00:09:12.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.958 "is_configured": true, 00:09:12.958 "data_offset": 2048, 00:09:12.958 "data_size": 63488 00:09:12.958 }, 00:09:12.958 { 00:09:12.958 "name": "pt2", 00:09:12.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.958 "is_configured": true, 00:09:12.958 "data_offset": 2048, 00:09:12.958 "data_size": 63488 00:09:12.958 }, 00:09:12.958 { 00:09:12.958 "name": "pt3", 00:09:12.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.958 "is_configured": true, 00:09:12.958 "data_offset": 2048, 00:09:12.958 "data_size": 63488 00:09:12.958 } 
00:09:12.958 ] 00:09:12.958 } 00:09:12.958 } 00:09:12.958 }' 00:09:12.958 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:13.219 pt2 00:09:13.219 pt3' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.219 [2024-11-21 04:55:29.896655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
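The `verify_raid_bdev_properties` passes above (bdev_raid.sh@187-193) join `block_size`, `md_size`, `md_interleave` and `dif_type` into one string per bdev and compare the raid volume's string against each base bdev's; the `[[ 512 == \5\1\2\ \ \ ]]` lines are that comparison with the null fields collapsed to empty strings. A minimal sketch of the same check, with the property values hard-coded from the JSON dumped in this trace (the real script extracts them with `jq` from `bdev_get_bdevs`):

```shell
# Join the four compared properties into one string, as the test does.
to_props() { printf '%s %s %s %s' "$1" "$2" "$3" "$4"; }

# Values taken from the trace: block_size 512, no metadata, no DIF.
cmp_raid_bdev=$(to_props 512 '' '' '')
for name in pt1 pt2 pt3; do
  cmp_base_bdev=$(to_props 512 '' '' '')
  # Verbatim string comparison, matching [[ 512 == \5\1\2\ \ \ ]] above.
  [[ $cmp_raid_bdev == "$cmp_base_bdev" ]] && echo "$name matches raid bdev"
done
```

Any mismatch in metadata layout between the raid volume and a base bdev would make the joined strings differ and fail the `[[ ... ]]` test.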
-- # [[ 0 == 0 ]] 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8a65731d-a19d-49be-919a-e9f5d0ef35bd 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8a65731d-a19d-49be-919a-e9f5d0ef35bd ']' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.219 [2024-11-21 04:55:29.940281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.219 [2024-11-21 04:55:29.940343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.219 [2024-11-21 04:55:29.940446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.219 [2024-11-21 04:55:29.940557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.219 [2024-11-21 04:55:29.940605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:13.219 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.480 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.480 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:13.480 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:13.480 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:13.480 04:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:13.480 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.480 04:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.480 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.480 [2024-11-21 04:55:30.088038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:13.480 [2024-11-21 04:55:30.089950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:13.480 [2024-11-21 04:55:30.090041] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:13.480 [2024-11-21 04:55:30.090126] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:13.480 [2024-11-21 04:55:30.090300] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:13.480 [2024-11-21 04:55:30.090362] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:13.480 [2024-11-21 04:55:30.090420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.480 [2024-11-21 04:55:30.090461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:13.480 request: 00:09:13.480 { 00:09:13.481 "name": "raid_bdev1", 00:09:13.481 "raid_level": "raid1", 00:09:13.481 "base_bdevs": [ 00:09:13.481 "malloc1", 00:09:13.481 "malloc2", 00:09:13.481 "malloc3" 00:09:13.481 ], 00:09:13.481 "superblock": false, 00:09:13.481 "method": "bdev_raid_create", 00:09:13.481 "req_id": 1 00:09:13.481 } 00:09:13.481 Got JSON-RPC error response 00:09:13.481 response: 00:09:13.481 { 00:09:13.481 "code": -17, 00:09:13.481 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:13.481 } 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
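The negative test above expects the second `bdev_raid_create` to fail with `-17` ("File exists") because the malloc bdevs still carry the old raid superblock, and wraps the call in `NOT`. The `es=0` / `es=1` / `(( es > 128 ))` / `(( !es == 0 ))` lines in the trace come from that helper in autotest_common.sh; a simplified, runnable reconstruction of its status-inversion logic (shape assumed from the trace, not the exact source):

```shell
# NOT runs a command that is *expected* to fail and inverts its status.
NOT() {
  local es=0
  "$@" || es=$?
  # Statuses above 128 mean the command died from a signal; propagate those
  # instead of treating them as an "expected" failure.
  (( es > 128 )) && return "$es"
  (( !es == 0 ))   # succeed only if the wrapped command failed
}

NOT false && echo "expected failure detected"
NOT true  || echo "unexpected success detected"
```

With the failing RPC substituted for `false`, the trace's `es=1` path makes `NOT` return success and the test proceeds.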
-r '.[]' 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.481 [2024-11-21 04:55:30.147900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:13.481 [2024-11-21 04:55:30.148005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.481 [2024-11-21 04:55:30.148039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:13.481 [2024-11-21 04:55:30.148067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.481 [2024-11-21 04:55:30.150178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.481 [2024-11-21 04:55:30.150243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:13.481 [2024-11-21 04:55:30.150340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:13.481 [2024-11-21 04:55:30.150396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:13.481 pt1 00:09:13.481 
04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.481 "name": "raid_bdev1", 00:09:13.481 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:13.481 "strip_size_kb": 0, 00:09:13.481 
"state": "configuring", 00:09:13.481 "raid_level": "raid1", 00:09:13.481 "superblock": true, 00:09:13.481 "num_base_bdevs": 3, 00:09:13.481 "num_base_bdevs_discovered": 1, 00:09:13.481 "num_base_bdevs_operational": 3, 00:09:13.481 "base_bdevs_list": [ 00:09:13.481 { 00:09:13.481 "name": "pt1", 00:09:13.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.481 "is_configured": true, 00:09:13.481 "data_offset": 2048, 00:09:13.481 "data_size": 63488 00:09:13.481 }, 00:09:13.481 { 00:09:13.481 "name": null, 00:09:13.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.481 "is_configured": false, 00:09:13.481 "data_offset": 2048, 00:09:13.481 "data_size": 63488 00:09:13.481 }, 00:09:13.481 { 00:09:13.481 "name": null, 00:09:13.481 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.481 "is_configured": false, 00:09:13.481 "data_offset": 2048, 00:09:13.481 "data_size": 63488 00:09:13.481 } 00:09:13.481 ] 00:09:13.481 }' 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.481 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.051 [2024-11-21 04:55:30.627135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.051 [2024-11-21 04:55:30.627258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.051 [2024-11-21 04:55:30.627295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:14.051 
[2024-11-21 04:55:30.627327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.051 [2024-11-21 04:55:30.627807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.051 [2024-11-21 04:55:30.627842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.051 [2024-11-21 04:55:30.627916] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:14.051 [2024-11-21 04:55:30.627942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.051 pt2 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.051 [2024-11-21 04:55:30.639109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.051 "name": "raid_bdev1", 00:09:14.051 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:14.051 "strip_size_kb": 0, 00:09:14.051 "state": "configuring", 00:09:14.051 "raid_level": "raid1", 00:09:14.051 "superblock": true, 00:09:14.051 "num_base_bdevs": 3, 00:09:14.051 "num_base_bdevs_discovered": 1, 00:09:14.051 "num_base_bdevs_operational": 3, 00:09:14.051 "base_bdevs_list": [ 00:09:14.051 { 00:09:14.051 "name": "pt1", 00:09:14.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.051 "is_configured": true, 00:09:14.051 "data_offset": 2048, 00:09:14.051 "data_size": 63488 00:09:14.051 }, 00:09:14.051 { 00:09:14.051 "name": null, 00:09:14.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.051 "is_configured": false, 00:09:14.051 "data_offset": 0, 00:09:14.051 "data_size": 63488 00:09:14.051 }, 00:09:14.051 { 00:09:14.051 "name": null, 00:09:14.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.051 "is_configured": false, 00:09:14.051 
"data_offset": 2048, 00:09:14.051 "data_size": 63488 00:09:14.051 } 00:09:14.051 ] 00:09:14.051 }' 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.051 04:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.622 [2024-11-21 04:55:31.070407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.622 [2024-11-21 04:55:31.070486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.622 [2024-11-21 04:55:31.070506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:14.622 [2024-11-21 04:55:31.070515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.622 [2024-11-21 04:55:31.070906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.622 [2024-11-21 04:55:31.070923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.622 [2024-11-21 04:55:31.070996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:14.622 [2024-11-21 04:55:31.071022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.622 pt2 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.622 04:55:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.622 [2024-11-21 04:55:31.082338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:14.622 [2024-11-21 04:55:31.082384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.622 [2024-11-21 04:55:31.082402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:14.622 [2024-11-21 04:55:31.082410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.622 [2024-11-21 04:55:31.082740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.622 [2024-11-21 04:55:31.082755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:14.622 [2024-11-21 04:55:31.082815] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:14.622 [2024-11-21 04:55:31.082832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:14.622 [2024-11-21 04:55:31.082929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:14.622 [2024-11-21 04:55:31.082939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:14.622 [2024-11-21 04:55:31.083160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:14.622 [2024-11-21 04:55:31.083300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:09:14.622 [2024-11-21 04:55:31.083314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:14.622 [2024-11-21 04:55:31.083411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.622 pt3 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.622 "name": "raid_bdev1", 00:09:14.622 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:14.622 "strip_size_kb": 0, 00:09:14.622 "state": "online", 00:09:14.622 "raid_level": "raid1", 00:09:14.622 "superblock": true, 00:09:14.622 "num_base_bdevs": 3, 00:09:14.622 "num_base_bdevs_discovered": 3, 00:09:14.622 "num_base_bdevs_operational": 3, 00:09:14.622 "base_bdevs_list": [ 00:09:14.622 { 00:09:14.622 "name": "pt1", 00:09:14.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.622 "is_configured": true, 00:09:14.622 "data_offset": 2048, 00:09:14.622 "data_size": 63488 00:09:14.622 }, 00:09:14.622 { 00:09:14.622 "name": "pt2", 00:09:14.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.622 "is_configured": true, 00:09:14.622 "data_offset": 2048, 00:09:14.622 "data_size": 63488 00:09:14.622 }, 00:09:14.622 { 00:09:14.622 "name": "pt3", 00:09:14.622 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.622 "is_configured": true, 00:09:14.622 "data_offset": 2048, 00:09:14.622 "data_size": 63488 00:09:14.622 } 00:09:14.622 ] 00:09:14.622 }' 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.622 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.882 [2024-11-21 04:55:31.517891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.882 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.882 "name": "raid_bdev1", 00:09:14.882 "aliases": [ 00:09:14.882 "8a65731d-a19d-49be-919a-e9f5d0ef35bd" 00:09:14.882 ], 00:09:14.882 "product_name": "Raid Volume", 00:09:14.882 "block_size": 512, 00:09:14.882 "num_blocks": 63488, 00:09:14.882 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:14.882 "assigned_rate_limits": { 00:09:14.882 "rw_ios_per_sec": 0, 00:09:14.882 "rw_mbytes_per_sec": 0, 00:09:14.882 "r_mbytes_per_sec": 0, 00:09:14.882 "w_mbytes_per_sec": 0 00:09:14.882 }, 00:09:14.882 "claimed": false, 00:09:14.882 "zoned": false, 00:09:14.882 "supported_io_types": { 00:09:14.882 "read": true, 00:09:14.882 "write": true, 00:09:14.882 "unmap": false, 00:09:14.882 "flush": false, 00:09:14.882 "reset": true, 00:09:14.882 "nvme_admin": false, 00:09:14.882 "nvme_io": false, 00:09:14.882 "nvme_io_md": false, 00:09:14.882 "write_zeroes": true, 00:09:14.882 "zcopy": false, 00:09:14.882 "get_zone_info": false, 
00:09:14.882 "zone_management": false, 00:09:14.882 "zone_append": false, 00:09:14.882 "compare": false, 00:09:14.882 "compare_and_write": false, 00:09:14.882 "abort": false, 00:09:14.882 "seek_hole": false, 00:09:14.882 "seek_data": false, 00:09:14.882 "copy": false, 00:09:14.882 "nvme_iov_md": false 00:09:14.882 }, 00:09:14.882 "memory_domains": [ 00:09:14.882 { 00:09:14.883 "dma_device_id": "system", 00:09:14.883 "dma_device_type": 1 00:09:14.883 }, 00:09:14.883 { 00:09:14.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.883 "dma_device_type": 2 00:09:14.883 }, 00:09:14.883 { 00:09:14.883 "dma_device_id": "system", 00:09:14.883 "dma_device_type": 1 00:09:14.883 }, 00:09:14.883 { 00:09:14.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.883 "dma_device_type": 2 00:09:14.883 }, 00:09:14.883 { 00:09:14.883 "dma_device_id": "system", 00:09:14.883 "dma_device_type": 1 00:09:14.883 }, 00:09:14.883 { 00:09:14.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.883 "dma_device_type": 2 00:09:14.883 } 00:09:14.883 ], 00:09:14.883 "driver_specific": { 00:09:14.883 "raid": { 00:09:14.883 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:14.883 "strip_size_kb": 0, 00:09:14.883 "state": "online", 00:09:14.883 "raid_level": "raid1", 00:09:14.883 "superblock": true, 00:09:14.883 "num_base_bdevs": 3, 00:09:14.883 "num_base_bdevs_discovered": 3, 00:09:14.883 "num_base_bdevs_operational": 3, 00:09:14.883 "base_bdevs_list": [ 00:09:14.883 { 00:09:14.883 "name": "pt1", 00:09:14.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.883 "is_configured": true, 00:09:14.883 "data_offset": 2048, 00:09:14.883 "data_size": 63488 00:09:14.883 }, 00:09:14.883 { 00:09:14.883 "name": "pt2", 00:09:14.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.883 "is_configured": true, 00:09:14.883 "data_offset": 2048, 00:09:14.883 "data_size": 63488 00:09:14.883 }, 00:09:14.883 { 00:09:14.883 "name": "pt3", 00:09:14.883 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:14.883 "is_configured": true, 00:09:14.883 "data_offset": 2048, 00:09:14.883 "data_size": 63488 00:09:14.883 } 00:09:14.883 ] 00:09:14.883 } 00:09:14.883 } 00:09:14.883 }' 00:09:14.883 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.883 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:14.883 pt2 00:09:14.883 pt3' 00:09:14.883 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.143 [2024-11-21 04:55:31.789456] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8a65731d-a19d-49be-919a-e9f5d0ef35bd '!=' 8a65731d-a19d-49be-919a-e9f5d0ef35bd ']' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.143 [2024-11-21 04:55:31.833119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.143 04:55:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.143 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.404 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.404 "name": "raid_bdev1", 00:09:15.404 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:15.404 "strip_size_kb": 0, 00:09:15.404 "state": "online", 00:09:15.404 "raid_level": "raid1", 00:09:15.404 "superblock": true, 00:09:15.404 "num_base_bdevs": 3, 00:09:15.404 "num_base_bdevs_discovered": 2, 00:09:15.404 "num_base_bdevs_operational": 2, 00:09:15.404 "base_bdevs_list": [ 00:09:15.404 { 00:09:15.404 "name": null, 00:09:15.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.404 "is_configured": false, 00:09:15.404 "data_offset": 0, 00:09:15.404 "data_size": 63488 00:09:15.404 }, 00:09:15.404 { 00:09:15.404 "name": "pt2", 00:09:15.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.404 "is_configured": true, 00:09:15.404 "data_offset": 2048, 00:09:15.404 "data_size": 63488 00:09:15.404 }, 00:09:15.404 { 00:09:15.404 "name": "pt3", 00:09:15.404 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:15.404 "is_configured": true, 00:09:15.404 "data_offset": 2048, 00:09:15.404 "data_size": 63488 00:09:15.404 } 
00:09:15.404 ] 00:09:15.404 }' 00:09:15.404 04:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.404 04:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.665 [2024-11-21 04:55:32.272297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.665 [2024-11-21 04:55:32.272365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.665 [2024-11-21 04:55:32.272486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.665 [2024-11-21 04:55:32.272593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.665 [2024-11-21 04:55:32.272647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.665 04:55:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.665 [2024-11-21 04:55:32.340193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:15.665 [2024-11-21 04:55:32.340243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.665 [2024-11-21 04:55:32.340261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:15.665 [2024-11-21 04:55:32.340270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.665 [2024-11-21 04:55:32.342388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.665 [2024-11-21 04:55:32.342422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:15.665 [2024-11-21 04:55:32.342489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:15.665 [2024-11-21 04:55:32.342519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:15.665 pt2 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.665 04:55:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.665 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.925 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.925 "name": "raid_bdev1", 00:09:15.925 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:15.925 "strip_size_kb": 0, 00:09:15.925 "state": "configuring", 00:09:15.925 "raid_level": "raid1", 00:09:15.925 "superblock": true, 00:09:15.925 "num_base_bdevs": 3, 00:09:15.925 "num_base_bdevs_discovered": 1, 00:09:15.925 "num_base_bdevs_operational": 2, 00:09:15.925 "base_bdevs_list": [ 00:09:15.925 { 00:09:15.925 "name": null, 00:09:15.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.925 "is_configured": false, 00:09:15.925 "data_offset": 2048, 00:09:15.925 "data_size": 63488 00:09:15.925 }, 00:09:15.925 { 00:09:15.925 "name": "pt2", 00:09:15.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.925 "is_configured": true, 00:09:15.925 "data_offset": 2048, 00:09:15.925 "data_size": 63488 00:09:15.925 }, 00:09:15.925 { 00:09:15.925 "name": null, 00:09:15.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:15.925 "is_configured": false, 00:09:15.925 "data_offset": 2048, 00:09:15.925 "data_size": 63488 00:09:15.925 } 
00:09:15.925 ] 00:09:15.925 }' 00:09:15.925 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.925 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.186 [2024-11-21 04:55:32.751522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:16.186 [2024-11-21 04:55:32.751584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.186 [2024-11-21 04:55:32.751607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:16.186 [2024-11-21 04:55:32.751616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.186 [2024-11-21 04:55:32.752033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.186 [2024-11-21 04:55:32.752059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:16.186 [2024-11-21 04:55:32.752155] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:16.186 [2024-11-21 04:55:32.752187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:16.186 [2024-11-21 04:55:32.752322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:09:16.186 [2024-11-21 04:55:32.752335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:16.186 [2024-11-21 04:55:32.752585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:16.186 [2024-11-21 04:55:32.752719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:16.186 [2024-11-21 04:55:32.752738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:16.186 [2024-11-21 04:55:32.752855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.186 pt3 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.186 
04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.186 "name": "raid_bdev1", 00:09:16.186 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:16.186 "strip_size_kb": 0, 00:09:16.186 "state": "online", 00:09:16.186 "raid_level": "raid1", 00:09:16.186 "superblock": true, 00:09:16.186 "num_base_bdevs": 3, 00:09:16.186 "num_base_bdevs_discovered": 2, 00:09:16.186 "num_base_bdevs_operational": 2, 00:09:16.186 "base_bdevs_list": [ 00:09:16.186 { 00:09:16.186 "name": null, 00:09:16.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.186 "is_configured": false, 00:09:16.186 "data_offset": 2048, 00:09:16.186 "data_size": 63488 00:09:16.186 }, 00:09:16.186 { 00:09:16.186 "name": "pt2", 00:09:16.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.186 "is_configured": true, 00:09:16.186 "data_offset": 2048, 00:09:16.186 "data_size": 63488 00:09:16.186 }, 00:09:16.186 { 00:09:16.186 "name": "pt3", 00:09:16.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.186 "is_configured": true, 00:09:16.186 "data_offset": 2048, 00:09:16.186 "data_size": 63488 00:09:16.186 } 00:09:16.186 ] 00:09:16.186 }' 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.186 04:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.756 [2024-11-21 04:55:33.214693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.756 [2024-11-21 04:55:33.214725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.756 [2024-11-21 04:55:33.214807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.756 [2024-11-21 04:55:33.214899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.756 [2024-11-21 04:55:33.214917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.756 [2024-11-21 04:55:33.282549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:16.756 [2024-11-21 04:55:33.282610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.756 [2024-11-21 04:55:33.282626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:16.756 [2024-11-21 04:55:33.282637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.756 [2024-11-21 04:55:33.284942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.756 [2024-11-21 04:55:33.284983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:16.756 [2024-11-21 04:55:33.285053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:16.756 [2024-11-21 04:55:33.285105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:16.756 [2024-11-21 04:55:33.285219] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:16.756 [2024-11-21 04:55:33.285241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.756 [2024-11-21 04:55:33.285263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:09:16.756 [2024-11-21 04:55:33.285305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:16.756 pt1 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.756 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.756 "name": "raid_bdev1", 00:09:16.756 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:16.756 "strip_size_kb": 0, 00:09:16.756 "state": "configuring", 00:09:16.756 "raid_level": "raid1", 00:09:16.756 "superblock": true, 00:09:16.756 "num_base_bdevs": 3, 00:09:16.756 "num_base_bdevs_discovered": 1, 00:09:16.756 "num_base_bdevs_operational": 2, 00:09:16.756 "base_bdevs_list": [ 00:09:16.756 { 00:09:16.756 "name": null, 00:09:16.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.756 "is_configured": false, 00:09:16.756 "data_offset": 2048, 00:09:16.756 "data_size": 63488 00:09:16.756 }, 00:09:16.756 { 00:09:16.756 "name": "pt2", 00:09:16.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.756 "is_configured": true, 00:09:16.756 "data_offset": 2048, 00:09:16.756 "data_size": 63488 00:09:16.756 }, 00:09:16.756 { 00:09:16.757 "name": null, 00:09:16.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.757 "is_configured": false, 00:09:16.757 "data_offset": 2048, 00:09:16.757 "data_size": 63488 00:09:16.757 } 00:09:16.757 ] 00:09:16.757 }' 00:09:16.757 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.757 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.017 [2024-11-21 04:55:33.721777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:17.017 [2024-11-21 04:55:33.721834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.017 [2024-11-21 04:55:33.721850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:17.017 [2024-11-21 04:55:33.721861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.017 [2024-11-21 04:55:33.722280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.017 [2024-11-21 04:55:33.722315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:17.017 [2024-11-21 04:55:33.722388] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:17.017 [2024-11-21 04:55:33.722464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:17.017 [2024-11-21 04:55:33.722595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:17.017 [2024-11-21 04:55:33.722615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:17.017 [2024-11-21 04:55:33.722873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:17.017 [2024-11-21 04:55:33.723027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:17.017 [2024-11-21 04:55:33.723046] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:17.017 [2024-11-21 04:55:33.723178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.017 pt3 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.017 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:17.277 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.277 "name": "raid_bdev1", 00:09:17.277 "uuid": "8a65731d-a19d-49be-919a-e9f5d0ef35bd", 00:09:17.277 "strip_size_kb": 0, 00:09:17.277 "state": "online", 00:09:17.277 "raid_level": "raid1", 00:09:17.277 "superblock": true, 00:09:17.277 "num_base_bdevs": 3, 00:09:17.277 "num_base_bdevs_discovered": 2, 00:09:17.277 "num_base_bdevs_operational": 2, 00:09:17.277 "base_bdevs_list": [ 00:09:17.277 { 00:09:17.277 "name": null, 00:09:17.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.277 "is_configured": false, 00:09:17.277 "data_offset": 2048, 00:09:17.277 "data_size": 63488 00:09:17.277 }, 00:09:17.277 { 00:09:17.277 "name": "pt2", 00:09:17.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.277 "is_configured": true, 00:09:17.277 "data_offset": 2048, 00:09:17.277 "data_size": 63488 00:09:17.277 }, 00:09:17.277 { 00:09:17.277 "name": "pt3", 00:09:17.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.277 "is_configured": true, 00:09:17.277 "data_offset": 2048, 00:09:17.277 "data_size": 63488 00:09:17.277 } 00:09:17.277 ] 00:09:17.277 }' 00:09:17.277 04:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.277 04:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.538 [2024-11-21 04:55:34.221238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8a65731d-a19d-49be-919a-e9f5d0ef35bd '!=' 8a65731d-a19d-49be-919a-e9f5d0ef35bd ']' 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79822 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 79822 ']' 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 79822 00:09:17.538 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:17.798 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.799 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79822 00:09:17.799 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.799 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.799 killing process with pid 79822 00:09:17.799 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79822' 00:09:17.799 04:55:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 79822 00:09:17.799 [2024-11-21 04:55:34.295792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.799 [2024-11-21 04:55:34.295901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.799 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 79822 00:09:17.799 [2024-11-21 04:55:34.295982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.799 [2024-11-21 04:55:34.295993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:17.799 [2024-11-21 04:55:34.329713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.059 04:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:18.059 00:09:18.059 real 0m6.349s 00:09:18.059 user 0m10.634s 00:09:18.059 sys 0m1.325s 00:09:18.059 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.059 04:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.059 ************************************ 00:09:18.059 END TEST raid_superblock_test 00:09:18.059 ************************************ 00:09:18.059 04:55:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:18.059 04:55:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:18.059 04:55:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.059 04:55:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.059 ************************************ 00:09:18.059 START TEST raid_read_error_test 00:09:18.059 ************************************ 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:18.059 04:55:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:18.059 04:55:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5fZ23GYmyN 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80257 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80257 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 80257 ']' 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.059 04:55:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.059 [2024-11-21 04:55:34.725659] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:09:18.059 [2024-11-21 04:55:34.725806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80257 ] 00:09:18.327 [2024-11-21 04:55:34.901873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.327 [2024-11-21 04:55:34.927498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.327 [2024-11-21 04:55:34.968962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.327 [2024-11-21 04:55:34.969007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.898 BaseBdev1_malloc 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.898 true 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.898 [2024-11-21 04:55:35.582673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:18.898 [2024-11-21 04:55:35.582724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.898 [2024-11-21 04:55:35.582749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:18.898 [2024-11-21 04:55:35.582765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.898 [2024-11-21 04:55:35.584913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.898 [2024-11-21 04:55:35.584949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:18.898 BaseBdev1 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.898 BaseBdev2_malloc 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.898 true 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.898 [2024-11-21 04:55:35.623363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:18.898 [2024-11-21 04:55:35.623411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.898 [2024-11-21 04:55:35.623430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:18.898 [2024-11-21 04:55:35.623439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.898 [2024-11-21 04:55:35.625484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.898 [2024-11-21 04:55:35.625521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:18.898 BaseBdev2 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.898 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.158 BaseBdev3_malloc 00:09:19.158 04:55:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.158 true 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.158 [2024-11-21 04:55:35.663757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:19.158 [2024-11-21 04:55:35.663813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.158 [2024-11-21 04:55:35.663854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:19.158 [2024-11-21 04:55:35.663866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.158 [2024-11-21 04:55:35.666053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.158 [2024-11-21 04:55:35.666098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:19.158 BaseBdev3 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.158 [2024-11-21 04:55:35.675770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.158 [2024-11-21 04:55:35.677829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.158 [2024-11-21 04:55:35.677912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.158 [2024-11-21 04:55:35.678113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:19.158 [2024-11-21 04:55:35.678130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:19.158 [2024-11-21 04:55:35.678382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:19.158 [2024-11-21 04:55:35.678566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:19.158 [2024-11-21 04:55:35.678591] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:19.158 [2024-11-21 04:55:35.678741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.158 04:55:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.158 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.158 "name": "raid_bdev1", 00:09:19.158 "uuid": "1a9cddf5-a893-4f39-8277-da0933031727", 00:09:19.158 "strip_size_kb": 0, 00:09:19.158 "state": "online", 00:09:19.158 "raid_level": "raid1", 00:09:19.159 "superblock": true, 00:09:19.159 "num_base_bdevs": 3, 00:09:19.159 "num_base_bdevs_discovered": 3, 00:09:19.159 "num_base_bdevs_operational": 3, 00:09:19.159 "base_bdevs_list": [ 00:09:19.159 { 00:09:19.159 "name": "BaseBdev1", 00:09:19.159 "uuid": "65f98867-f90b-52ac-8d63-55f980fcd55c", 00:09:19.159 "is_configured": true, 00:09:19.159 "data_offset": 2048, 00:09:19.159 "data_size": 63488 00:09:19.159 }, 00:09:19.159 { 00:09:19.159 "name": "BaseBdev2", 00:09:19.159 "uuid": "bd434b54-2222-5a74-bfc2-096c8cc13c7e", 00:09:19.159 "is_configured": true, 00:09:19.159 "data_offset": 2048, 00:09:19.159 "data_size": 63488 
00:09:19.159 }, 00:09:19.159 { 00:09:19.159 "name": "BaseBdev3", 00:09:19.159 "uuid": "5b44a227-82ab-58c1-84b8-40020f609c76", 00:09:19.159 "is_configured": true, 00:09:19.159 "data_offset": 2048, 00:09:19.159 "data_size": 63488 00:09:19.159 } 00:09:19.159 ] 00:09:19.159 }' 00:09:19.159 04:55:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.159 04:55:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.419 04:55:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:19.419 04:55:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:19.679 [2024-11-21 04:55:36.211263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.620 
04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.620 "name": "raid_bdev1", 00:09:20.620 "uuid": "1a9cddf5-a893-4f39-8277-da0933031727", 00:09:20.620 "strip_size_kb": 0, 00:09:20.620 "state": "online", 00:09:20.620 "raid_level": "raid1", 00:09:20.620 "superblock": true, 00:09:20.620 "num_base_bdevs": 3, 00:09:20.620 "num_base_bdevs_discovered": 3, 00:09:20.620 "num_base_bdevs_operational": 3, 00:09:20.620 "base_bdevs_list": [ 00:09:20.620 { 00:09:20.620 "name": "BaseBdev1", 00:09:20.620 "uuid": "65f98867-f90b-52ac-8d63-55f980fcd55c", 
00:09:20.620 "is_configured": true, 00:09:20.620 "data_offset": 2048, 00:09:20.620 "data_size": 63488 00:09:20.620 }, 00:09:20.620 { 00:09:20.620 "name": "BaseBdev2", 00:09:20.620 "uuid": "bd434b54-2222-5a74-bfc2-096c8cc13c7e", 00:09:20.620 "is_configured": true, 00:09:20.620 "data_offset": 2048, 00:09:20.620 "data_size": 63488 00:09:20.620 }, 00:09:20.620 { 00:09:20.620 "name": "BaseBdev3", 00:09:20.620 "uuid": "5b44a227-82ab-58c1-84b8-40020f609c76", 00:09:20.620 "is_configured": true, 00:09:20.620 "data_offset": 2048, 00:09:20.620 "data_size": 63488 00:09:20.620 } 00:09:20.620 ] 00:09:20.620 }' 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.620 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.890 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.890 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.890 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.890 [2024-11-21 04:55:37.597821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.890 [2024-11-21 04:55:37.597868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.890 [2024-11-21 04:55:37.600884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.890 [2024-11-21 04:55:37.601048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.890 [2024-11-21 04:55:37.601233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.890 [2024-11-21 04:55:37.601254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:20.890 { 00:09:20.890 "results": [ 00:09:20.890 { 00:09:20.890 "job": "raid_bdev1", 
00:09:20.890 "core_mask": "0x1", 00:09:20.890 "workload": "randrw", 00:09:20.890 "percentage": 50, 00:09:20.890 "status": "finished", 00:09:20.890 "queue_depth": 1, 00:09:20.890 "io_size": 131072, 00:09:20.890 "runtime": 1.38741, 00:09:20.890 "iops": 14418.232534002205, 00:09:20.890 "mibps": 1802.2790667502757, 00:09:20.890 "io_failed": 0, 00:09:20.890 "io_timeout": 0, 00:09:20.890 "avg_latency_us": 66.80391642195579, 00:09:20.890 "min_latency_us": 21.463755458515283, 00:09:20.890 "max_latency_us": 1423.7624454148472 00:09:20.890 } 00:09:20.890 ], 00:09:20.890 "core_count": 1 00:09:20.890 } 00:09:20.890 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.890 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80257 00:09:20.890 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 80257 ']' 00:09:20.890 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 80257 00:09:20.890 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:20.890 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.890 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80257 00:09:21.150 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.150 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.150 killing process with pid 80257 00:09:21.150 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80257' 00:09:21.150 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 80257 00:09:21.150 [2024-11-21 04:55:37.646674] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.150 04:55:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 80257 00:09:21.150 [2024-11-21 04:55:37.673031] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.150 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5fZ23GYmyN 00:09:21.150 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:21.150 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:21.409 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:21.409 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:21.409 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.409 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:21.409 04:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:21.409 00:09:21.409 real 0m3.276s 00:09:21.409 user 0m4.151s 00:09:21.409 sys 0m0.546s 00:09:21.409 ************************************ 00:09:21.409 END TEST raid_read_error_test 00:09:21.409 ************************************ 00:09:21.409 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.409 04:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.409 04:55:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:21.409 04:55:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.409 04:55:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.409 04:55:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.409 ************************************ 00:09:21.409 START TEST raid_write_error_test 00:09:21.409 ************************************ 00:09:21.409 04:55:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yhuvCJVMNT 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80386 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80386 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 80386 ']' 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.409 04:55:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.409 [2024-11-21 04:55:38.075042] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:09:21.409 [2024-11-21 04:55:38.075187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80386 ] 00:09:21.669 [2024-11-21 04:55:38.231750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.669 [2024-11-21 04:55:38.261436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.669 [2024-11-21 04:55:38.304041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.669 [2024-11-21 04:55:38.304207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.238 BaseBdev1_malloc 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.238 true 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.238 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.239 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.239 [2024-11-21 04:55:38.945983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.239 [2024-11-21 04:55:38.946126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.239 [2024-11-21 04:55:38.946169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:22.239 [2024-11-21 04:55:38.946178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.239 [2024-11-21 04:55:38.948314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.239 [2024-11-21 04:55:38.948349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.239 BaseBdev1 00:09:22.239 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.239 04:55:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.239 04:55:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:22.239 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.239 04:55:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.239 BaseBdev2_malloc 00:09:22.239 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.239 04:55:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.239 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.239 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.499 true 00:09:22.499 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.499 04:55:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.499 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.499 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.499 [2024-11-21 04:55:38.986272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.499 [2024-11-21 04:55:38.986319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.499 [2024-11-21 04:55:38.986337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:22.499 [2024-11-21 04:55:38.986345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.499 [2024-11-21 04:55:38.988392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.499 [2024-11-21 04:55:38.988433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.499 BaseBdev2 00:09:22.499 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.499 04:55:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.499 04:55:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:22.499 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.499 04:55:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.499 BaseBdev3_malloc 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.499 true 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.499 [2024-11-21 04:55:39.026565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:22.499 [2024-11-21 04:55:39.026614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.499 [2024-11-21 04:55:39.026649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:22.499 [2024-11-21 04:55:39.026659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.499 [2024-11-21 04:55:39.028851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.499 [2024-11-21 04:55:39.028888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:22.499 BaseBdev3 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.499 [2024-11-21 04:55:39.038603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.499 [2024-11-21 04:55:39.040518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.499 [2024-11-21 04:55:39.040591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.499 [2024-11-21 04:55:39.040763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:22.499 [2024-11-21 04:55:39.040777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:22.499 [2024-11-21 04:55:39.040978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:22.499 [2024-11-21 04:55:39.041139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:22.499 [2024-11-21 04:55:39.041150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:22.499 [2024-11-21 04:55:39.041285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.499 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.499 "name": "raid_bdev1", 00:09:22.499 "uuid": "f8785589-26b3-40a3-bf50-deaa76143838", 00:09:22.499 "strip_size_kb": 0, 00:09:22.499 "state": "online", 00:09:22.499 "raid_level": "raid1", 00:09:22.499 "superblock": true, 00:09:22.499 "num_base_bdevs": 3, 00:09:22.499 "num_base_bdevs_discovered": 3, 00:09:22.499 "num_base_bdevs_operational": 3, 00:09:22.499 "base_bdevs_list": [ 00:09:22.499 { 00:09:22.499 "name": "BaseBdev1", 00:09:22.499 
"uuid": "5fe2eca3-267a-5a35-ae4c-3f37f20cd87e", 00:09:22.499 "is_configured": true, 00:09:22.499 "data_offset": 2048, 00:09:22.499 "data_size": 63488 00:09:22.499 }, 00:09:22.499 { 00:09:22.499 "name": "BaseBdev2", 00:09:22.499 "uuid": "40ce3e11-64fb-592c-8bcd-8acea45320b8", 00:09:22.499 "is_configured": true, 00:09:22.499 "data_offset": 2048, 00:09:22.499 "data_size": 63488 00:09:22.499 }, 00:09:22.499 { 00:09:22.499 "name": "BaseBdev3", 00:09:22.499 "uuid": "80ae00bd-eb6a-543d-aaff-7ab7ce266d1c", 00:09:22.499 "is_configured": true, 00:09:22.499 "data_offset": 2048, 00:09:22.499 "data_size": 63488 00:09:22.499 } 00:09:22.499 ] 00:09:22.499 }' 00:09:22.500 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.500 04:55:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.758 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:22.758 04:55:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:23.018 [2024-11-21 04:55:39.542055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.961 [2024-11-21 04:55:40.461701] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:23.961 [2024-11-21 04:55:40.461874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.961 [2024-11-21 04:55:40.462136] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006560 
00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.961 "name": "raid_bdev1", 00:09:23.961 "uuid": "f8785589-26b3-40a3-bf50-deaa76143838", 00:09:23.961 "strip_size_kb": 0, 00:09:23.961 "state": "online", 00:09:23.961 "raid_level": "raid1", 00:09:23.961 "superblock": true, 00:09:23.961 "num_base_bdevs": 3, 00:09:23.961 "num_base_bdevs_discovered": 2, 00:09:23.961 "num_base_bdevs_operational": 2, 00:09:23.961 "base_bdevs_list": [ 00:09:23.961 { 00:09:23.961 "name": null, 00:09:23.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.961 "is_configured": false, 00:09:23.961 "data_offset": 0, 00:09:23.961 "data_size": 63488 00:09:23.961 }, 00:09:23.961 { 00:09:23.961 "name": "BaseBdev2", 00:09:23.961 "uuid": "40ce3e11-64fb-592c-8bcd-8acea45320b8", 00:09:23.961 "is_configured": true, 00:09:23.961 "data_offset": 2048, 00:09:23.961 "data_size": 63488 00:09:23.961 }, 00:09:23.961 { 00:09:23.961 "name": "BaseBdev3", 00:09:23.961 "uuid": "80ae00bd-eb6a-543d-aaff-7ab7ce266d1c", 00:09:23.961 "is_configured": true, 00:09:23.961 "data_offset": 2048, 00:09:23.961 "data_size": 63488 00:09:23.961 } 00:09:23.961 ] 00:09:23.961 }' 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.961 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.221 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.221 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.221 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.221 [2024-11-21 04:55:40.948013] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.221 [2024-11-21 04:55:40.948154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.221 [2024-11-21 04:55:40.950756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.221 [2024-11-21 04:55:40.950808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.221 [2024-11-21 04:55:40.950893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.221 [2024-11-21 04:55:40.950903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:24.221 { 00:09:24.221 "results": [ 00:09:24.221 { 00:09:24.221 "job": "raid_bdev1", 00:09:24.221 "core_mask": "0x1", 00:09:24.221 "workload": "randrw", 00:09:24.221 "percentage": 50, 00:09:24.221 "status": "finished", 00:09:24.221 "queue_depth": 1, 00:09:24.221 "io_size": 131072, 00:09:24.221 "runtime": 1.406931, 00:09:24.221 "iops": 16581.48125245659, 00:09:24.221 "mibps": 2072.6851565570737, 00:09:24.221 "io_failed": 0, 00:09:24.221 "io_timeout": 0, 00:09:24.221 "avg_latency_us": 57.79596622529336, 00:09:24.221 "min_latency_us": 21.687336244541484, 00:09:24.221 "max_latency_us": 1330.7528384279476 00:09:24.221 } 00:09:24.221 ], 00:09:24.221 "core_count": 1 00:09:24.221 } 00:09:24.221 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.221 04:55:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80386 00:09:24.221 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 80386 ']' 00:09:24.221 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 80386 00:09:24.481 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:24.481 04:55:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.481 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80386 00:09:24.481 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.481 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.481 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80386' 00:09:24.481 killing process with pid 80386 00:09:24.481 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 80386 00:09:24.481 [2024-11-21 04:55:40.996378] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.481 04:55:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 80386 00:09:24.481 [2024-11-21 04:55:41.021845] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.742 04:55:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yhuvCJVMNT 00:09:24.742 04:55:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:24.742 04:55:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:24.742 04:55:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:24.742 04:55:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:24.742 04:55:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.742 04:55:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:24.742 04:55:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:24.742 00:09:24.742 real 0m3.271s 00:09:24.742 user 0m4.182s 00:09:24.742 sys 0m0.523s 00:09:24.742 04:55:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.742 04:55:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.742 ************************************ 00:09:24.742 END TEST raid_write_error_test 00:09:24.742 ************************************ 00:09:24.742 04:55:41 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:24.742 04:55:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:24.742 04:55:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:24.742 04:55:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:24.742 04:55:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.742 04:55:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.742 ************************************ 00:09:24.742 START TEST raid_state_function_test 00:09:24.742 ************************************ 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:24.742 
04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80513 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80513' 00:09:24.742 Process raid pid: 80513 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80513 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80513 ']' 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.742 04:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.742 [2024-11-21 04:55:41.408346] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
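As a quick sanity check on the bdevperf results block earlier in this log (the raid_write_error_test run): the reported `mibps` follows directly from the reported `iops` and `io_size`, since MiB/s = IOPS × io_size / 2^20. A minimal sketch, with the values copied verbatim from the results JSON above; the helper name `iops_to_mibps` is ours, not part of the test suite:

```python
# Convert an IOPS figure to MiB/s for a fixed I/O size.
# Values below are copied from the bdevperf results JSON in this log:
#   "io_size": 131072, "iops": 16581.48125245659, "mibps": 2072.6851565570737
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    """MiB/s = IOPS * io_size / 2^20 (one MiB is 1048576 bytes)."""
    return iops * io_size_bytes / (1 << 20)

reported_iops = 16581.48125245659
reported_mibps = 2072.6851565570737
io_size = 131072  # 128 KiB per I/O, as configured for the randrw job

computed = iops_to_mibps(reported_iops, io_size)
# The reported throughput agrees with IOPS * io_size to float precision.
assert abs(computed - reported_mibps) < 1e-6
```

The same relation holds for any fixed-size workload, so it is a cheap consistency check when reading bdevperf output.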
00:09:24.742 [2024-11-21 04:55:41.408502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.001 [2024-11-21 04:55:41.581855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.001 [2024-11-21 04:55:41.608228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.001 [2024-11-21 04:55:41.650832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.001 [2024-11-21 04:55:41.650871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.569 [2024-11-21 04:55:42.236846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.569 [2024-11-21 04:55:42.236907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.569 [2024-11-21 04:55:42.236917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.569 [2024-11-21 04:55:42.236926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.569 [2024-11-21 04:55:42.236933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:25.569 [2024-11-21 04:55:42.236943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.569 [2024-11-21 04:55:42.236949] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:25.569 [2024-11-21 04:55:42.236957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.569 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.570 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.570 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.570 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.570 04:55:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.570 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.570 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.570 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.570 "name": "Existed_Raid", 00:09:25.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.570 "strip_size_kb": 64, 00:09:25.570 "state": "configuring", 00:09:25.570 "raid_level": "raid0", 00:09:25.570 "superblock": false, 00:09:25.570 "num_base_bdevs": 4, 00:09:25.570 "num_base_bdevs_discovered": 0, 00:09:25.570 "num_base_bdevs_operational": 4, 00:09:25.570 "base_bdevs_list": [ 00:09:25.570 { 00:09:25.570 "name": "BaseBdev1", 00:09:25.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.570 "is_configured": false, 00:09:25.570 "data_offset": 0, 00:09:25.570 "data_size": 0 00:09:25.570 }, 00:09:25.570 { 00:09:25.570 "name": "BaseBdev2", 00:09:25.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.570 "is_configured": false, 00:09:25.570 "data_offset": 0, 00:09:25.570 "data_size": 0 00:09:25.570 }, 00:09:25.570 { 00:09:25.570 "name": "BaseBdev3", 00:09:25.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.570 "is_configured": false, 00:09:25.570 "data_offset": 0, 00:09:25.570 "data_size": 0 00:09:25.570 }, 00:09:25.570 { 00:09:25.570 "name": "BaseBdev4", 00:09:25.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.570 "is_configured": false, 00:09:25.570 "data_offset": 0, 00:09:25.570 "data_size": 0 00:09:25.570 } 00:09:25.570 ] 00:09:25.570 }' 00:09:25.570 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.570 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.139 [2024-11-21 04:55:42.699998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.139 [2024-11-21 04:55:42.700150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.139 [2024-11-21 04:55:42.711960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.139 [2024-11-21 04:55:42.712047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.139 [2024-11-21 04:55:42.712073] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.139 [2024-11-21 04:55:42.712109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.139 [2024-11-21 04:55:42.712129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.139 [2024-11-21 04:55:42.712153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.139 [2024-11-21 04:55:42.712222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:26.139 [2024-11-21 04:55:42.712262] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.139 [2024-11-21 04:55:42.732749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.139 BaseBdev1 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.139 [ 00:09:26.139 { 00:09:26.139 "name": "BaseBdev1", 00:09:26.139 "aliases": [ 00:09:26.139 "59df1c7c-929e-4221-abf5-372794eef373" 00:09:26.139 ], 00:09:26.139 "product_name": "Malloc disk", 00:09:26.139 "block_size": 512, 00:09:26.139 "num_blocks": 65536, 00:09:26.139 "uuid": "59df1c7c-929e-4221-abf5-372794eef373", 00:09:26.139 "assigned_rate_limits": { 00:09:26.139 "rw_ios_per_sec": 0, 00:09:26.139 "rw_mbytes_per_sec": 0, 00:09:26.139 "r_mbytes_per_sec": 0, 00:09:26.139 "w_mbytes_per_sec": 0 00:09:26.139 }, 00:09:26.139 "claimed": true, 00:09:26.139 "claim_type": "exclusive_write", 00:09:26.139 "zoned": false, 00:09:26.139 "supported_io_types": { 00:09:26.139 "read": true, 00:09:26.139 "write": true, 00:09:26.139 "unmap": true, 00:09:26.139 "flush": true, 00:09:26.139 "reset": true, 00:09:26.139 "nvme_admin": false, 00:09:26.139 "nvme_io": false, 00:09:26.139 "nvme_io_md": false, 00:09:26.139 "write_zeroes": true, 00:09:26.139 "zcopy": true, 00:09:26.139 "get_zone_info": false, 00:09:26.139 "zone_management": false, 00:09:26.139 "zone_append": false, 00:09:26.139 "compare": false, 00:09:26.139 "compare_and_write": false, 00:09:26.139 "abort": true, 00:09:26.139 "seek_hole": false, 00:09:26.139 "seek_data": false, 00:09:26.139 "copy": true, 00:09:26.139 "nvme_iov_md": false 00:09:26.139 }, 00:09:26.139 "memory_domains": [ 00:09:26.139 { 00:09:26.139 "dma_device_id": "system", 00:09:26.139 "dma_device_type": 1 00:09:26.139 }, 00:09:26.139 { 00:09:26.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.139 "dma_device_type": 2 00:09:26.139 } 00:09:26.139 ], 00:09:26.139 "driver_specific": {} 00:09:26.139 } 00:09:26.139 ] 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:26.139 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.140 "name": "Existed_Raid", 
00:09:26.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.140 "strip_size_kb": 64, 00:09:26.140 "state": "configuring", 00:09:26.140 "raid_level": "raid0", 00:09:26.140 "superblock": false, 00:09:26.140 "num_base_bdevs": 4, 00:09:26.140 "num_base_bdevs_discovered": 1, 00:09:26.140 "num_base_bdevs_operational": 4, 00:09:26.140 "base_bdevs_list": [ 00:09:26.140 { 00:09:26.140 "name": "BaseBdev1", 00:09:26.140 "uuid": "59df1c7c-929e-4221-abf5-372794eef373", 00:09:26.140 "is_configured": true, 00:09:26.140 "data_offset": 0, 00:09:26.140 "data_size": 65536 00:09:26.140 }, 00:09:26.140 { 00:09:26.140 "name": "BaseBdev2", 00:09:26.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.140 "is_configured": false, 00:09:26.140 "data_offset": 0, 00:09:26.140 "data_size": 0 00:09:26.140 }, 00:09:26.140 { 00:09:26.140 "name": "BaseBdev3", 00:09:26.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.140 "is_configured": false, 00:09:26.140 "data_offset": 0, 00:09:26.140 "data_size": 0 00:09:26.140 }, 00:09:26.140 { 00:09:26.140 "name": "BaseBdev4", 00:09:26.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.140 "is_configured": false, 00:09:26.140 "data_offset": 0, 00:09:26.140 "data_size": 0 00:09:26.140 } 00:09:26.140 ] 00:09:26.140 }' 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.140 04:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.708 [2024-11-21 04:55:43.188027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.708 [2024-11-21 04:55:43.188186] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.708 [2024-11-21 04:55:43.200021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.708 [2024-11-21 04:55:43.201902] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.708 [2024-11-21 04:55:43.201976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.708 [2024-11-21 04:55:43.202003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.708 [2024-11-21 04:55:43.202025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.708 [2024-11-21 04:55:43.202042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:26.708 [2024-11-21 04:55:43.202062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
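The `verify_raid_bdev_state` helper traced above drives `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then compares fields of the selected entry against the expected state, level, strip size, and operational bdev count. A rough Python equivalent of that selection and check, using a trimmed copy of the state dump shown in this log (a sketch of the shell helper's logic, not the test's actual implementation):

```python
# Sketch of what bdev_raid.sh's verify_raid_bdev_state does: pick the raid
# bdev by name out of the bdev_raid_get_bdevs output, then compare fields.
def select_bdev(bdevs, name):
    """Equivalent of jq's '.[] | select(.name == NAME)' over a JSON array."""
    for b in bdevs:
        if b["name"] == name:
            return b
    return None

def verify_raid_bdev_state(bdevs, name, state, level, strip_kb, operational):
    info = select_bdev(bdevs, name)
    return (info is not None
            and info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_kb
            and info["num_base_bdevs_operational"] == operational)

# Trimmed copy of the Existed_Raid state dump shown in this log.
raid_get_bdevs_output = [{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4,
}]

# Mirrors the shell call: verify_raid_bdev_state Existed_Raid configuring raid0 64 4
assert verify_raid_bdev_state(raid_get_bdevs_output,
                              "Existed_Raid", "configuring", "raid0", 64, 4)
```

The check returns false rather than raising when the bdev is missing or any field mismatches, which matches how the shell helper fails its comparison.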
00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.708 "name": "Existed_Raid", 00:09:26.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.708 "strip_size_kb": 64, 00:09:26.708 "state": "configuring", 00:09:26.708 "raid_level": "raid0", 00:09:26.708 "superblock": false, 00:09:26.708 "num_base_bdevs": 4, 00:09:26.708 
"num_base_bdevs_discovered": 1, 00:09:26.708 "num_base_bdevs_operational": 4, 00:09:26.708 "base_bdevs_list": [ 00:09:26.708 { 00:09:26.708 "name": "BaseBdev1", 00:09:26.708 "uuid": "59df1c7c-929e-4221-abf5-372794eef373", 00:09:26.708 "is_configured": true, 00:09:26.708 "data_offset": 0, 00:09:26.708 "data_size": 65536 00:09:26.708 }, 00:09:26.708 { 00:09:26.708 "name": "BaseBdev2", 00:09:26.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.708 "is_configured": false, 00:09:26.708 "data_offset": 0, 00:09:26.708 "data_size": 0 00:09:26.708 }, 00:09:26.708 { 00:09:26.708 "name": "BaseBdev3", 00:09:26.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.708 "is_configured": false, 00:09:26.708 "data_offset": 0, 00:09:26.708 "data_size": 0 00:09:26.708 }, 00:09:26.708 { 00:09:26.708 "name": "BaseBdev4", 00:09:26.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.708 "is_configured": false, 00:09:26.708 "data_offset": 0, 00:09:26.708 "data_size": 0 00:09:26.708 } 00:09:26.708 ] 00:09:26.708 }' 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.708 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.968 [2024-11-21 04:55:43.662185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.968 BaseBdev2 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:26.968 04:55:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.968 [ 00:09:26.968 { 00:09:26.968 "name": "BaseBdev2", 00:09:26.968 "aliases": [ 00:09:26.968 "ab45d64e-5d19-4be8-a5a7-1af59a5cae16" 00:09:26.968 ], 00:09:26.968 "product_name": "Malloc disk", 00:09:26.968 "block_size": 512, 00:09:26.968 "num_blocks": 65536, 00:09:26.968 "uuid": "ab45d64e-5d19-4be8-a5a7-1af59a5cae16", 00:09:26.968 "assigned_rate_limits": { 00:09:26.968 "rw_ios_per_sec": 0, 00:09:26.968 "rw_mbytes_per_sec": 0, 00:09:26.968 "r_mbytes_per_sec": 0, 00:09:26.968 "w_mbytes_per_sec": 0 00:09:26.968 }, 00:09:26.968 "claimed": true, 00:09:26.968 "claim_type": "exclusive_write", 00:09:26.968 "zoned": false, 00:09:26.968 "supported_io_types": { 
00:09:26.968 "read": true, 00:09:26.968 "write": true, 00:09:26.968 "unmap": true, 00:09:26.968 "flush": true, 00:09:26.968 "reset": true, 00:09:26.968 "nvme_admin": false, 00:09:26.968 "nvme_io": false, 00:09:26.968 "nvme_io_md": false, 00:09:26.968 "write_zeroes": true, 00:09:26.968 "zcopy": true, 00:09:26.968 "get_zone_info": false, 00:09:26.968 "zone_management": false, 00:09:26.968 "zone_append": false, 00:09:26.968 "compare": false, 00:09:26.968 "compare_and_write": false, 00:09:26.968 "abort": true, 00:09:26.968 "seek_hole": false, 00:09:26.968 "seek_data": false, 00:09:26.968 "copy": true, 00:09:26.968 "nvme_iov_md": false 00:09:26.968 }, 00:09:26.968 "memory_domains": [ 00:09:26.968 { 00:09:26.968 "dma_device_id": "system", 00:09:26.968 "dma_device_type": 1 00:09:26.968 }, 00:09:26.968 { 00:09:26.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.968 "dma_device_type": 2 00:09:26.968 } 00:09:26.968 ], 00:09:26.968 "driver_specific": {} 00:09:26.968 } 00:09:26.968 ] 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:26.968 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.227 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.227 "name": "Existed_Raid", 00:09:27.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.227 "strip_size_kb": 64, 00:09:27.227 "state": "configuring", 00:09:27.227 "raid_level": "raid0", 00:09:27.227 "superblock": false, 00:09:27.227 "num_base_bdevs": 4, 00:09:27.227 "num_base_bdevs_discovered": 2, 00:09:27.227 "num_base_bdevs_operational": 4, 00:09:27.227 "base_bdevs_list": [ 00:09:27.227 { 00:09:27.227 "name": "BaseBdev1", 00:09:27.227 "uuid": "59df1c7c-929e-4221-abf5-372794eef373", 00:09:27.227 "is_configured": true, 00:09:27.227 "data_offset": 0, 00:09:27.227 "data_size": 65536 00:09:27.227 }, 00:09:27.227 { 00:09:27.227 "name": "BaseBdev2", 00:09:27.227 "uuid": "ab45d64e-5d19-4be8-a5a7-1af59a5cae16", 00:09:27.227 
"is_configured": true, 00:09:27.227 "data_offset": 0, 00:09:27.227 "data_size": 65536 00:09:27.227 }, 00:09:27.227 { 00:09:27.227 "name": "BaseBdev3", 00:09:27.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.227 "is_configured": false, 00:09:27.227 "data_offset": 0, 00:09:27.227 "data_size": 0 00:09:27.227 }, 00:09:27.227 { 00:09:27.227 "name": "BaseBdev4", 00:09:27.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.228 "is_configured": false, 00:09:27.228 "data_offset": 0, 00:09:27.228 "data_size": 0 00:09:27.228 } 00:09:27.228 ] 00:09:27.228 }' 00:09:27.228 04:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.228 04:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.486 BaseBdev3 00:09:27.486 [2024-11-21 04:55:44.147521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.486 [ 00:09:27.486 { 00:09:27.486 "name": "BaseBdev3", 00:09:27.486 "aliases": [ 00:09:27.486 "cc442c19-7440-4b20-b475-d46db5cd8ba7" 00:09:27.486 ], 00:09:27.486 "product_name": "Malloc disk", 00:09:27.486 "block_size": 512, 00:09:27.486 "num_blocks": 65536, 00:09:27.486 "uuid": "cc442c19-7440-4b20-b475-d46db5cd8ba7", 00:09:27.486 "assigned_rate_limits": { 00:09:27.486 "rw_ios_per_sec": 0, 00:09:27.486 "rw_mbytes_per_sec": 0, 00:09:27.486 "r_mbytes_per_sec": 0, 00:09:27.486 "w_mbytes_per_sec": 0 00:09:27.486 }, 00:09:27.486 "claimed": true, 00:09:27.486 "claim_type": "exclusive_write", 00:09:27.486 "zoned": false, 00:09:27.486 "supported_io_types": { 00:09:27.486 "read": true, 00:09:27.486 "write": true, 00:09:27.486 "unmap": true, 00:09:27.486 "flush": true, 00:09:27.486 "reset": true, 00:09:27.486 "nvme_admin": false, 00:09:27.486 "nvme_io": false, 00:09:27.486 "nvme_io_md": false, 00:09:27.486 "write_zeroes": true, 00:09:27.486 "zcopy": true, 00:09:27.486 "get_zone_info": false, 00:09:27.486 "zone_management": false, 00:09:27.486 "zone_append": false, 00:09:27.486 "compare": false, 00:09:27.486 "compare_and_write": false, 
00:09:27.486 "abort": true, 00:09:27.486 "seek_hole": false, 00:09:27.486 "seek_data": false, 00:09:27.486 "copy": true, 00:09:27.486 "nvme_iov_md": false 00:09:27.486 }, 00:09:27.486 "memory_domains": [ 00:09:27.486 { 00:09:27.486 "dma_device_id": "system", 00:09:27.486 "dma_device_type": 1 00:09:27.486 }, 00:09:27.486 { 00:09:27.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.486 "dma_device_type": 2 00:09:27.486 } 00:09:27.486 ], 00:09:27.486 "driver_specific": {} 00:09:27.486 } 00:09:27.486 ] 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.486 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.745 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.745 "name": "Existed_Raid", 00:09:27.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.745 "strip_size_kb": 64, 00:09:27.745 "state": "configuring", 00:09:27.745 "raid_level": "raid0", 00:09:27.745 "superblock": false, 00:09:27.745 "num_base_bdevs": 4, 00:09:27.745 "num_base_bdevs_discovered": 3, 00:09:27.745 "num_base_bdevs_operational": 4, 00:09:27.745 "base_bdevs_list": [ 00:09:27.745 { 00:09:27.745 "name": "BaseBdev1", 00:09:27.745 "uuid": "59df1c7c-929e-4221-abf5-372794eef373", 00:09:27.745 "is_configured": true, 00:09:27.745 "data_offset": 0, 00:09:27.745 "data_size": 65536 00:09:27.745 }, 00:09:27.745 { 00:09:27.745 "name": "BaseBdev2", 00:09:27.745 "uuid": "ab45d64e-5d19-4be8-a5a7-1af59a5cae16", 00:09:27.745 "is_configured": true, 00:09:27.745 "data_offset": 0, 00:09:27.745 "data_size": 65536 00:09:27.745 }, 00:09:27.745 { 00:09:27.745 "name": "BaseBdev3", 00:09:27.745 "uuid": "cc442c19-7440-4b20-b475-d46db5cd8ba7", 00:09:27.745 "is_configured": true, 00:09:27.745 "data_offset": 0, 00:09:27.745 "data_size": 65536 00:09:27.745 }, 00:09:27.745 { 00:09:27.745 "name": "BaseBdev4", 00:09:27.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.745 "is_configured": false, 
00:09:27.745 "data_offset": 0, 00:09:27.745 "data_size": 0 00:09:27.745 } 00:09:27.745 ] 00:09:27.745 }' 00:09:27.745 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.745 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.005 [2024-11-21 04:55:44.645765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:28.005 [2024-11-21 04:55:44.645815] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:28.005 [2024-11-21 04:55:44.645826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:28.005 [2024-11-21 04:55:44.646128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:28.005 [2024-11-21 04:55:44.646282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:28.005 [2024-11-21 04:55:44.646295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:28.005 [2024-11-21 04:55:44.646504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.005 BaseBdev4 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.005 [ 00:09:28.005 { 00:09:28.005 "name": "BaseBdev4", 00:09:28.005 "aliases": [ 00:09:28.005 "622825f4-abac-4d10-8ca0-eaccf30633aa" 00:09:28.005 ], 00:09:28.005 "product_name": "Malloc disk", 00:09:28.005 "block_size": 512, 00:09:28.005 "num_blocks": 65536, 00:09:28.005 "uuid": "622825f4-abac-4d10-8ca0-eaccf30633aa", 00:09:28.005 "assigned_rate_limits": { 00:09:28.005 "rw_ios_per_sec": 0, 00:09:28.005 "rw_mbytes_per_sec": 0, 00:09:28.005 "r_mbytes_per_sec": 0, 00:09:28.005 "w_mbytes_per_sec": 0 00:09:28.005 }, 00:09:28.005 "claimed": true, 00:09:28.005 "claim_type": "exclusive_write", 00:09:28.005 "zoned": false, 00:09:28.005 "supported_io_types": { 00:09:28.005 "read": true, 00:09:28.005 "write": true, 00:09:28.005 "unmap": true, 00:09:28.005 "flush": true, 00:09:28.005 "reset": true, 00:09:28.005 
"nvme_admin": false, 00:09:28.005 "nvme_io": false, 00:09:28.005 "nvme_io_md": false, 00:09:28.005 "write_zeroes": true, 00:09:28.005 "zcopy": true, 00:09:28.005 "get_zone_info": false, 00:09:28.005 "zone_management": false, 00:09:28.005 "zone_append": false, 00:09:28.005 "compare": false, 00:09:28.005 "compare_and_write": false, 00:09:28.005 "abort": true, 00:09:28.005 "seek_hole": false, 00:09:28.005 "seek_data": false, 00:09:28.005 "copy": true, 00:09:28.005 "nvme_iov_md": false 00:09:28.005 }, 00:09:28.005 "memory_domains": [ 00:09:28.005 { 00:09:28.005 "dma_device_id": "system", 00:09:28.005 "dma_device_type": 1 00:09:28.005 }, 00:09:28.005 { 00:09:28.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.005 "dma_device_type": 2 00:09:28.005 } 00:09:28.005 ], 00:09:28.005 "driver_specific": {} 00:09:28.005 } 00:09:28.005 ] 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.005 04:55:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.005 "name": "Existed_Raid", 00:09:28.005 "uuid": "b95ee1de-050b-4543-ac6f-9822e337f5db", 00:09:28.005 "strip_size_kb": 64, 00:09:28.005 "state": "online", 00:09:28.005 "raid_level": "raid0", 00:09:28.005 "superblock": false, 00:09:28.005 "num_base_bdevs": 4, 00:09:28.005 "num_base_bdevs_discovered": 4, 00:09:28.005 "num_base_bdevs_operational": 4, 00:09:28.005 "base_bdevs_list": [ 00:09:28.005 { 00:09:28.005 "name": "BaseBdev1", 00:09:28.005 "uuid": "59df1c7c-929e-4221-abf5-372794eef373", 00:09:28.005 "is_configured": true, 00:09:28.005 "data_offset": 0, 00:09:28.005 "data_size": 65536 00:09:28.005 }, 00:09:28.005 { 00:09:28.005 "name": "BaseBdev2", 00:09:28.005 "uuid": "ab45d64e-5d19-4be8-a5a7-1af59a5cae16", 00:09:28.005 "is_configured": true, 00:09:28.005 "data_offset": 0, 00:09:28.005 "data_size": 65536 00:09:28.005 }, 00:09:28.005 { 00:09:28.005 "name": "BaseBdev3", 00:09:28.005 "uuid": 
"cc442c19-7440-4b20-b475-d46db5cd8ba7", 00:09:28.005 "is_configured": true, 00:09:28.005 "data_offset": 0, 00:09:28.005 "data_size": 65536 00:09:28.005 }, 00:09:28.005 { 00:09:28.005 "name": "BaseBdev4", 00:09:28.005 "uuid": "622825f4-abac-4d10-8ca0-eaccf30633aa", 00:09:28.005 "is_configured": true, 00:09:28.005 "data_offset": 0, 00:09:28.005 "data_size": 65536 00:09:28.005 } 00:09:28.005 ] 00:09:28.005 }' 00:09:28.005 04:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.006 04:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.574 [2024-11-21 04:55:45.129355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.574 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.574 04:55:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:28.574 "name": "Existed_Raid", 00:09:28.574 "aliases": [ 00:09:28.574 "b95ee1de-050b-4543-ac6f-9822e337f5db" 00:09:28.574 ], 00:09:28.574 "product_name": "Raid Volume", 00:09:28.574 "block_size": 512, 00:09:28.574 "num_blocks": 262144, 00:09:28.574 "uuid": "b95ee1de-050b-4543-ac6f-9822e337f5db", 00:09:28.574 "assigned_rate_limits": { 00:09:28.574 "rw_ios_per_sec": 0, 00:09:28.574 "rw_mbytes_per_sec": 0, 00:09:28.574 "r_mbytes_per_sec": 0, 00:09:28.574 "w_mbytes_per_sec": 0 00:09:28.574 }, 00:09:28.574 "claimed": false, 00:09:28.574 "zoned": false, 00:09:28.574 "supported_io_types": { 00:09:28.574 "read": true, 00:09:28.574 "write": true, 00:09:28.574 "unmap": true, 00:09:28.574 "flush": true, 00:09:28.574 "reset": true, 00:09:28.574 "nvme_admin": false, 00:09:28.574 "nvme_io": false, 00:09:28.574 "nvme_io_md": false, 00:09:28.574 "write_zeroes": true, 00:09:28.574 "zcopy": false, 00:09:28.574 "get_zone_info": false, 00:09:28.574 "zone_management": false, 00:09:28.574 "zone_append": false, 00:09:28.574 "compare": false, 00:09:28.574 "compare_and_write": false, 00:09:28.574 "abort": false, 00:09:28.574 "seek_hole": false, 00:09:28.574 "seek_data": false, 00:09:28.574 "copy": false, 00:09:28.574 "nvme_iov_md": false 00:09:28.574 }, 00:09:28.574 "memory_domains": [ 00:09:28.574 { 00:09:28.574 "dma_device_id": "system", 00:09:28.574 "dma_device_type": 1 00:09:28.574 }, 00:09:28.574 { 00:09:28.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.574 "dma_device_type": 2 00:09:28.574 }, 00:09:28.574 { 00:09:28.574 "dma_device_id": "system", 00:09:28.574 "dma_device_type": 1 00:09:28.574 }, 00:09:28.574 { 00:09:28.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.574 "dma_device_type": 2 00:09:28.574 }, 00:09:28.574 { 00:09:28.574 "dma_device_id": "system", 00:09:28.574 "dma_device_type": 1 00:09:28.574 }, 00:09:28.574 { 00:09:28.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:28.574 "dma_device_type": 2 00:09:28.574 }, 00:09:28.574 { 00:09:28.574 "dma_device_id": "system", 00:09:28.574 "dma_device_type": 1 00:09:28.574 }, 00:09:28.574 { 00:09:28.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.574 "dma_device_type": 2 00:09:28.574 } 00:09:28.574 ], 00:09:28.574 "driver_specific": { 00:09:28.575 "raid": { 00:09:28.575 "uuid": "b95ee1de-050b-4543-ac6f-9822e337f5db", 00:09:28.575 "strip_size_kb": 64, 00:09:28.575 "state": "online", 00:09:28.575 "raid_level": "raid0", 00:09:28.575 "superblock": false, 00:09:28.575 "num_base_bdevs": 4, 00:09:28.575 "num_base_bdevs_discovered": 4, 00:09:28.575 "num_base_bdevs_operational": 4, 00:09:28.575 "base_bdevs_list": [ 00:09:28.575 { 00:09:28.575 "name": "BaseBdev1", 00:09:28.575 "uuid": "59df1c7c-929e-4221-abf5-372794eef373", 00:09:28.575 "is_configured": true, 00:09:28.575 "data_offset": 0, 00:09:28.575 "data_size": 65536 00:09:28.575 }, 00:09:28.575 { 00:09:28.575 "name": "BaseBdev2", 00:09:28.575 "uuid": "ab45d64e-5d19-4be8-a5a7-1af59a5cae16", 00:09:28.575 "is_configured": true, 00:09:28.575 "data_offset": 0, 00:09:28.575 "data_size": 65536 00:09:28.575 }, 00:09:28.575 { 00:09:28.575 "name": "BaseBdev3", 00:09:28.575 "uuid": "cc442c19-7440-4b20-b475-d46db5cd8ba7", 00:09:28.575 "is_configured": true, 00:09:28.575 "data_offset": 0, 00:09:28.575 "data_size": 65536 00:09:28.575 }, 00:09:28.575 { 00:09:28.575 "name": "BaseBdev4", 00:09:28.575 "uuid": "622825f4-abac-4d10-8ca0-eaccf30633aa", 00:09:28.575 "is_configured": true, 00:09:28.575 "data_offset": 0, 00:09:28.575 "data_size": 65536 00:09:28.575 } 00:09:28.575 ] 00:09:28.575 } 00:09:28.575 } 00:09:28.575 }' 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:28.575 BaseBdev2 00:09:28.575 BaseBdev3 
00:09:28.575 BaseBdev4' 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.575 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.834 04:55:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.834 04:55:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.834 [2024-11-21 04:55:45.416527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:28.834 [2024-11-21 04:55:45.416557] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.834 [2024-11-21 04:55:45.416615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.834 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.835 "name": "Existed_Raid", 00:09:28.835 "uuid": "b95ee1de-050b-4543-ac6f-9822e337f5db", 00:09:28.835 "strip_size_kb": 64, 00:09:28.835 "state": "offline", 00:09:28.835 "raid_level": "raid0", 00:09:28.835 "superblock": false, 00:09:28.835 "num_base_bdevs": 4, 00:09:28.835 "num_base_bdevs_discovered": 3, 00:09:28.835 "num_base_bdevs_operational": 3, 00:09:28.835 "base_bdevs_list": [ 00:09:28.835 { 00:09:28.835 "name": null, 00:09:28.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.835 "is_configured": false, 00:09:28.835 "data_offset": 0, 00:09:28.835 "data_size": 65536 00:09:28.835 }, 00:09:28.835 { 00:09:28.835 "name": "BaseBdev2", 00:09:28.835 "uuid": "ab45d64e-5d19-4be8-a5a7-1af59a5cae16", 00:09:28.835 "is_configured": 
true, 00:09:28.835 "data_offset": 0, 00:09:28.835 "data_size": 65536 00:09:28.835 }, 00:09:28.835 { 00:09:28.835 "name": "BaseBdev3", 00:09:28.835 "uuid": "cc442c19-7440-4b20-b475-d46db5cd8ba7", 00:09:28.835 "is_configured": true, 00:09:28.835 "data_offset": 0, 00:09:28.835 "data_size": 65536 00:09:28.835 }, 00:09:28.835 { 00:09:28.835 "name": "BaseBdev4", 00:09:28.835 "uuid": "622825f4-abac-4d10-8ca0-eaccf30633aa", 00:09:28.835 "is_configured": true, 00:09:28.835 "data_offset": 0, 00:09:28.835 "data_size": 65536 00:09:28.835 } 00:09:28.835 ] 00:09:28.835 }' 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.835 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.404 [2024-11-21 04:55:45.914915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.404 [2024-11-21 04:55:45.986018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.404 04:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.404 04:55:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.404 [2024-11-21 04:55:46.053050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:29.404 [2024-11-21 04:55:46.053098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.404 BaseBdev2 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.404 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.673 [ 00:09:29.673 { 00:09:29.673 "name": "BaseBdev2", 00:09:29.673 "aliases": [ 00:09:29.673 "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d" 00:09:29.673 ], 00:09:29.673 "product_name": "Malloc disk", 00:09:29.673 "block_size": 512, 00:09:29.673 "num_blocks": 65536, 00:09:29.673 "uuid": "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d", 00:09:29.673 "assigned_rate_limits": { 00:09:29.673 "rw_ios_per_sec": 0, 00:09:29.673 "rw_mbytes_per_sec": 0, 00:09:29.673 "r_mbytes_per_sec": 0, 00:09:29.673 "w_mbytes_per_sec": 0 00:09:29.673 }, 00:09:29.673 "claimed": false, 00:09:29.673 "zoned": false, 00:09:29.673 "supported_io_types": { 00:09:29.673 "read": true, 00:09:29.673 "write": true, 00:09:29.673 "unmap": true, 00:09:29.673 "flush": true, 00:09:29.673 "reset": true, 00:09:29.673 "nvme_admin": false, 00:09:29.673 "nvme_io": false, 00:09:29.673 "nvme_io_md": false, 00:09:29.673 "write_zeroes": true, 00:09:29.673 "zcopy": true, 00:09:29.673 "get_zone_info": false, 00:09:29.673 "zone_management": false, 00:09:29.673 "zone_append": false, 00:09:29.673 "compare": false, 00:09:29.673 "compare_and_write": false, 00:09:29.673 "abort": true, 00:09:29.673 "seek_hole": false, 00:09:29.673 "seek_data": false, 
00:09:29.673 "copy": true, 00:09:29.673 "nvme_iov_md": false 00:09:29.673 }, 00:09:29.673 "memory_domains": [ 00:09:29.673 { 00:09:29.673 "dma_device_id": "system", 00:09:29.673 "dma_device_type": 1 00:09:29.673 }, 00:09:29.673 { 00:09:29.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.673 "dma_device_type": 2 00:09:29.673 } 00:09:29.673 ], 00:09:29.673 "driver_specific": {} 00:09:29.673 } 00:09:29.673 ] 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.673 BaseBdev3 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.673 
04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.673 [ 00:09:29.673 { 00:09:29.673 "name": "BaseBdev3", 00:09:29.673 "aliases": [ 00:09:29.673 "b417d18f-e601-4afd-9a48-9eb08482096d" 00:09:29.673 ], 00:09:29.673 "product_name": "Malloc disk", 00:09:29.673 "block_size": 512, 00:09:29.673 "num_blocks": 65536, 00:09:29.673 "uuid": "b417d18f-e601-4afd-9a48-9eb08482096d", 00:09:29.673 "assigned_rate_limits": { 00:09:29.673 "rw_ios_per_sec": 0, 00:09:29.673 "rw_mbytes_per_sec": 0, 00:09:29.673 "r_mbytes_per_sec": 0, 00:09:29.673 "w_mbytes_per_sec": 0 00:09:29.673 }, 00:09:29.673 "claimed": false, 00:09:29.673 "zoned": false, 00:09:29.673 "supported_io_types": { 00:09:29.673 "read": true, 00:09:29.673 "write": true, 00:09:29.673 "unmap": true, 00:09:29.673 "flush": true, 00:09:29.673 "reset": true, 00:09:29.673 "nvme_admin": false, 00:09:29.673 "nvme_io": false, 00:09:29.673 "nvme_io_md": false, 00:09:29.673 "write_zeroes": true, 00:09:29.673 "zcopy": true, 00:09:29.673 "get_zone_info": false, 00:09:29.673 "zone_management": false, 00:09:29.673 "zone_append": false, 00:09:29.673 "compare": false, 00:09:29.673 "compare_and_write": false, 00:09:29.673 "abort": true, 00:09:29.673 "seek_hole": false, 00:09:29.673 "seek_data": false, 00:09:29.673 
"copy": true, 00:09:29.673 "nvme_iov_md": false 00:09:29.673 }, 00:09:29.673 "memory_domains": [ 00:09:29.673 { 00:09:29.673 "dma_device_id": "system", 00:09:29.673 "dma_device_type": 1 00:09:29.673 }, 00:09:29.673 { 00:09:29.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.673 "dma_device_type": 2 00:09:29.673 } 00:09:29.673 ], 00:09:29.673 "driver_specific": {} 00:09:29.673 } 00:09:29.673 ] 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.673 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.674 BaseBdev4 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.674 04:55:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.674 [ 00:09:29.674 { 00:09:29.674 "name": "BaseBdev4", 00:09:29.674 "aliases": [ 00:09:29.674 "3578a925-aaf6-40a5-8e28-cfc1e41dc170" 00:09:29.674 ], 00:09:29.674 "product_name": "Malloc disk", 00:09:29.674 "block_size": 512, 00:09:29.674 "num_blocks": 65536, 00:09:29.674 "uuid": "3578a925-aaf6-40a5-8e28-cfc1e41dc170", 00:09:29.674 "assigned_rate_limits": { 00:09:29.674 "rw_ios_per_sec": 0, 00:09:29.674 "rw_mbytes_per_sec": 0, 00:09:29.674 "r_mbytes_per_sec": 0, 00:09:29.674 "w_mbytes_per_sec": 0 00:09:29.674 }, 00:09:29.674 "claimed": false, 00:09:29.674 "zoned": false, 00:09:29.674 "supported_io_types": { 00:09:29.674 "read": true, 00:09:29.674 "write": true, 00:09:29.674 "unmap": true, 00:09:29.674 "flush": true, 00:09:29.674 "reset": true, 00:09:29.674 "nvme_admin": false, 00:09:29.674 "nvme_io": false, 00:09:29.674 "nvme_io_md": false, 00:09:29.674 "write_zeroes": true, 00:09:29.674 "zcopy": true, 00:09:29.674 "get_zone_info": false, 00:09:29.674 "zone_management": false, 00:09:29.674 "zone_append": false, 00:09:29.674 "compare": false, 00:09:29.674 "compare_and_write": false, 00:09:29.674 "abort": true, 00:09:29.674 "seek_hole": false, 00:09:29.674 "seek_data": false, 00:09:29.674 "copy": true, 
00:09:29.674 "nvme_iov_md": false 00:09:29.674 }, 00:09:29.674 "memory_domains": [ 00:09:29.674 { 00:09:29.674 "dma_device_id": "system", 00:09:29.674 "dma_device_type": 1 00:09:29.674 }, 00:09:29.674 { 00:09:29.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.674 "dma_device_type": 2 00:09:29.674 } 00:09:29.674 ], 00:09:29.674 "driver_specific": {} 00:09:29.674 } 00:09:29.674 ] 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.674 [2024-11-21 04:55:46.280924] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.674 [2024-11-21 04:55:46.281077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.674 [2024-11-21 04:55:46.281150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.674 [2024-11-21 04:55:46.283194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.674 [2024-11-21 04:55:46.283318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.674 04:55:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.674 "name": "Existed_Raid", 00:09:29.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.674 "strip_size_kb": 64, 00:09:29.674 "state": "configuring", 00:09:29.674 
"raid_level": "raid0", 00:09:29.674 "superblock": false, 00:09:29.674 "num_base_bdevs": 4, 00:09:29.674 "num_base_bdevs_discovered": 3, 00:09:29.674 "num_base_bdevs_operational": 4, 00:09:29.674 "base_bdevs_list": [ 00:09:29.674 { 00:09:29.674 "name": "BaseBdev1", 00:09:29.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.674 "is_configured": false, 00:09:29.674 "data_offset": 0, 00:09:29.674 "data_size": 0 00:09:29.674 }, 00:09:29.674 { 00:09:29.674 "name": "BaseBdev2", 00:09:29.674 "uuid": "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d", 00:09:29.674 "is_configured": true, 00:09:29.674 "data_offset": 0, 00:09:29.674 "data_size": 65536 00:09:29.674 }, 00:09:29.674 { 00:09:29.674 "name": "BaseBdev3", 00:09:29.674 "uuid": "b417d18f-e601-4afd-9a48-9eb08482096d", 00:09:29.674 "is_configured": true, 00:09:29.674 "data_offset": 0, 00:09:29.674 "data_size": 65536 00:09:29.674 }, 00:09:29.674 { 00:09:29.674 "name": "BaseBdev4", 00:09:29.674 "uuid": "3578a925-aaf6-40a5-8e28-cfc1e41dc170", 00:09:29.674 "is_configured": true, 00:09:29.674 "data_offset": 0, 00:09:29.674 "data_size": 65536 00:09:29.674 } 00:09:29.674 ] 00:09:29.674 }' 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.674 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.257 [2024-11-21 04:55:46.740130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.257 "name": "Existed_Raid", 00:09:30.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.257 "strip_size_kb": 64, 00:09:30.257 "state": "configuring", 00:09:30.257 "raid_level": "raid0", 00:09:30.257 "superblock": false, 00:09:30.257 
"num_base_bdevs": 4, 00:09:30.257 "num_base_bdevs_discovered": 2, 00:09:30.257 "num_base_bdevs_operational": 4, 00:09:30.257 "base_bdevs_list": [ 00:09:30.257 { 00:09:30.257 "name": "BaseBdev1", 00:09:30.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.257 "is_configured": false, 00:09:30.257 "data_offset": 0, 00:09:30.257 "data_size": 0 00:09:30.257 }, 00:09:30.257 { 00:09:30.257 "name": null, 00:09:30.257 "uuid": "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d", 00:09:30.257 "is_configured": false, 00:09:30.257 "data_offset": 0, 00:09:30.257 "data_size": 65536 00:09:30.257 }, 00:09:30.257 { 00:09:30.257 "name": "BaseBdev3", 00:09:30.257 "uuid": "b417d18f-e601-4afd-9a48-9eb08482096d", 00:09:30.257 "is_configured": true, 00:09:30.257 "data_offset": 0, 00:09:30.257 "data_size": 65536 00:09:30.257 }, 00:09:30.257 { 00:09:30.257 "name": "BaseBdev4", 00:09:30.257 "uuid": "3578a925-aaf6-40a5-8e28-cfc1e41dc170", 00:09:30.257 "is_configured": true, 00:09:30.257 "data_offset": 0, 00:09:30.257 "data_size": 65536 00:09:30.257 } 00:09:30.257 ] 00:09:30.257 }' 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.257 04:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:30.516 04:55:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.516 [2024-11-21 04:55:47.226222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.516 BaseBdev1 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.516 04:55:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.775 [ 00:09:30.775 { 00:09:30.775 "name": "BaseBdev1", 00:09:30.775 "aliases": [ 00:09:30.775 "fde70763-f836-483d-b1e6-0021d9f9a652" 00:09:30.775 ], 00:09:30.775 "product_name": "Malloc disk", 00:09:30.775 "block_size": 512, 00:09:30.775 "num_blocks": 65536, 00:09:30.775 "uuid": "fde70763-f836-483d-b1e6-0021d9f9a652", 00:09:30.775 "assigned_rate_limits": { 00:09:30.775 "rw_ios_per_sec": 0, 00:09:30.775 "rw_mbytes_per_sec": 0, 00:09:30.775 "r_mbytes_per_sec": 0, 00:09:30.775 "w_mbytes_per_sec": 0 00:09:30.775 }, 00:09:30.775 "claimed": true, 00:09:30.775 "claim_type": "exclusive_write", 00:09:30.775 "zoned": false, 00:09:30.775 "supported_io_types": { 00:09:30.775 "read": true, 00:09:30.775 "write": true, 00:09:30.775 "unmap": true, 00:09:30.775 "flush": true, 00:09:30.775 "reset": true, 00:09:30.775 "nvme_admin": false, 00:09:30.775 "nvme_io": false, 00:09:30.775 "nvme_io_md": false, 00:09:30.775 "write_zeroes": true, 00:09:30.775 "zcopy": true, 00:09:30.775 "get_zone_info": false, 00:09:30.775 "zone_management": false, 00:09:30.775 "zone_append": false, 00:09:30.775 "compare": false, 00:09:30.775 "compare_and_write": false, 00:09:30.775 "abort": true, 00:09:30.775 "seek_hole": false, 00:09:30.775 "seek_data": false, 00:09:30.775 "copy": true, 00:09:30.775 "nvme_iov_md": false 00:09:30.775 }, 00:09:30.775 "memory_domains": [ 00:09:30.775 { 00:09:30.775 "dma_device_id": "system", 00:09:30.775 "dma_device_type": 1 00:09:30.775 }, 00:09:30.775 { 00:09:30.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.775 "dma_device_type": 2 00:09:30.775 } 00:09:30.775 ], 00:09:30.775 "driver_specific": {} 00:09:30.775 } 00:09:30.775 ] 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.775 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.775 "name": "Existed_Raid", 00:09:30.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.775 "strip_size_kb": 64, 00:09:30.775 "state": "configuring", 00:09:30.775 "raid_level": "raid0", 00:09:30.775 "superblock": false, 
00:09:30.775 "num_base_bdevs": 4, 00:09:30.775 "num_base_bdevs_discovered": 3, 00:09:30.775 "num_base_bdevs_operational": 4, 00:09:30.775 "base_bdevs_list": [ 00:09:30.775 { 00:09:30.775 "name": "BaseBdev1", 00:09:30.775 "uuid": "fde70763-f836-483d-b1e6-0021d9f9a652", 00:09:30.775 "is_configured": true, 00:09:30.775 "data_offset": 0, 00:09:30.775 "data_size": 65536 00:09:30.775 }, 00:09:30.775 { 00:09:30.775 "name": null, 00:09:30.775 "uuid": "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d", 00:09:30.775 "is_configured": false, 00:09:30.775 "data_offset": 0, 00:09:30.775 "data_size": 65536 00:09:30.775 }, 00:09:30.775 { 00:09:30.775 "name": "BaseBdev3", 00:09:30.775 "uuid": "b417d18f-e601-4afd-9a48-9eb08482096d", 00:09:30.776 "is_configured": true, 00:09:30.776 "data_offset": 0, 00:09:30.776 "data_size": 65536 00:09:30.776 }, 00:09:30.776 { 00:09:30.776 "name": "BaseBdev4", 00:09:30.776 "uuid": "3578a925-aaf6-40a5-8e28-cfc1e41dc170", 00:09:30.776 "is_configured": true, 00:09:30.776 "data_offset": 0, 00:09:30.776 "data_size": 65536 00:09:30.776 } 00:09:30.776 ] 00:09:30.776 }' 00:09:30.776 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.776 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:31.034 04:55:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.034 [2024-11-21 04:55:47.733382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.034 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.035 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.035 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.035 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.035 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.035 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.035 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.035 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.035 04:55:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.035 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.035 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.294 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.294 "name": "Existed_Raid", 00:09:31.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.294 "strip_size_kb": 64, 00:09:31.294 "state": "configuring", 00:09:31.294 "raid_level": "raid0", 00:09:31.294 "superblock": false, 00:09:31.294 "num_base_bdevs": 4, 00:09:31.294 "num_base_bdevs_discovered": 2, 00:09:31.294 "num_base_bdevs_operational": 4, 00:09:31.294 "base_bdevs_list": [ 00:09:31.294 { 00:09:31.294 "name": "BaseBdev1", 00:09:31.294 "uuid": "fde70763-f836-483d-b1e6-0021d9f9a652", 00:09:31.294 "is_configured": true, 00:09:31.294 "data_offset": 0, 00:09:31.294 "data_size": 65536 00:09:31.294 }, 00:09:31.294 { 00:09:31.294 "name": null, 00:09:31.294 "uuid": "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d", 00:09:31.294 "is_configured": false, 00:09:31.294 "data_offset": 0, 00:09:31.294 "data_size": 65536 00:09:31.294 }, 00:09:31.294 { 00:09:31.294 "name": null, 00:09:31.294 "uuid": "b417d18f-e601-4afd-9a48-9eb08482096d", 00:09:31.294 "is_configured": false, 00:09:31.294 "data_offset": 0, 00:09:31.294 "data_size": 65536 00:09:31.294 }, 00:09:31.294 { 00:09:31.294 "name": "BaseBdev4", 00:09:31.294 "uuid": "3578a925-aaf6-40a5-8e28-cfc1e41dc170", 00:09:31.294 "is_configured": true, 00:09:31.294 "data_offset": 0, 00:09:31.294 "data_size": 65536 00:09:31.294 } 00:09:31.294 ] 00:09:31.294 }' 00:09:31.294 04:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.294 04:55:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.554 [2024-11-21 04:55:48.192643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.554 "name": "Existed_Raid", 00:09:31.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.554 "strip_size_kb": 64, 00:09:31.554 "state": "configuring", 00:09:31.554 "raid_level": "raid0", 00:09:31.554 "superblock": false, 00:09:31.554 "num_base_bdevs": 4, 00:09:31.554 "num_base_bdevs_discovered": 3, 00:09:31.554 "num_base_bdevs_operational": 4, 00:09:31.554 "base_bdevs_list": [ 00:09:31.554 { 00:09:31.554 "name": "BaseBdev1", 00:09:31.554 "uuid": "fde70763-f836-483d-b1e6-0021d9f9a652", 00:09:31.554 "is_configured": true, 00:09:31.554 "data_offset": 0, 00:09:31.554 "data_size": 65536 00:09:31.554 }, 00:09:31.554 { 00:09:31.554 "name": null, 00:09:31.554 "uuid": "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d", 00:09:31.554 "is_configured": false, 00:09:31.554 "data_offset": 0, 00:09:31.554 "data_size": 65536 00:09:31.554 }, 00:09:31.554 { 00:09:31.554 "name": "BaseBdev3", 00:09:31.554 "uuid": "b417d18f-e601-4afd-9a48-9eb08482096d", 
00:09:31.554 "is_configured": true, 00:09:31.554 "data_offset": 0, 00:09:31.554 "data_size": 65536 00:09:31.554 }, 00:09:31.554 { 00:09:31.554 "name": "BaseBdev4", 00:09:31.554 "uuid": "3578a925-aaf6-40a5-8e28-cfc1e41dc170", 00:09:31.554 "is_configured": true, 00:09:31.554 "data_offset": 0, 00:09:31.554 "data_size": 65536 00:09:31.554 } 00:09:31.554 ] 00:09:31.554 }' 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.554 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.123 [2024-11-21 04:55:48.699834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.123 04:55:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.123 "name": "Existed_Raid", 00:09:32.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.123 "strip_size_kb": 64, 00:09:32.123 "state": "configuring", 00:09:32.123 "raid_level": "raid0", 00:09:32.123 "superblock": false, 00:09:32.123 "num_base_bdevs": 4, 00:09:32.123 "num_base_bdevs_discovered": 2, 00:09:32.123 
"num_base_bdevs_operational": 4, 00:09:32.123 "base_bdevs_list": [ 00:09:32.123 { 00:09:32.123 "name": null, 00:09:32.123 "uuid": "fde70763-f836-483d-b1e6-0021d9f9a652", 00:09:32.123 "is_configured": false, 00:09:32.123 "data_offset": 0, 00:09:32.123 "data_size": 65536 00:09:32.123 }, 00:09:32.123 { 00:09:32.123 "name": null, 00:09:32.123 "uuid": "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d", 00:09:32.123 "is_configured": false, 00:09:32.123 "data_offset": 0, 00:09:32.123 "data_size": 65536 00:09:32.123 }, 00:09:32.123 { 00:09:32.123 "name": "BaseBdev3", 00:09:32.123 "uuid": "b417d18f-e601-4afd-9a48-9eb08482096d", 00:09:32.123 "is_configured": true, 00:09:32.123 "data_offset": 0, 00:09:32.123 "data_size": 65536 00:09:32.123 }, 00:09:32.123 { 00:09:32.123 "name": "BaseBdev4", 00:09:32.123 "uuid": "3578a925-aaf6-40a5-8e28-cfc1e41dc170", 00:09:32.123 "is_configured": true, 00:09:32.123 "data_offset": 0, 00:09:32.123 "data_size": 65536 00:09:32.123 } 00:09:32.123 ] 00:09:32.123 }' 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.123 04:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.689 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.689 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.689 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.689 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:32.689 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.689 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:32.689 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:32.689 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.689 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.690 [2024-11-21 04:55:49.233533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.690 
04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.690 "name": "Existed_Raid", 00:09:32.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.690 "strip_size_kb": 64, 00:09:32.690 "state": "configuring", 00:09:32.690 "raid_level": "raid0", 00:09:32.690 "superblock": false, 00:09:32.690 "num_base_bdevs": 4, 00:09:32.690 "num_base_bdevs_discovered": 3, 00:09:32.690 "num_base_bdevs_operational": 4, 00:09:32.690 "base_bdevs_list": [ 00:09:32.690 { 00:09:32.690 "name": null, 00:09:32.690 "uuid": "fde70763-f836-483d-b1e6-0021d9f9a652", 00:09:32.690 "is_configured": false, 00:09:32.690 "data_offset": 0, 00:09:32.690 "data_size": 65536 00:09:32.690 }, 00:09:32.690 { 00:09:32.690 "name": "BaseBdev2", 00:09:32.690 "uuid": "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d", 00:09:32.690 "is_configured": true, 00:09:32.690 "data_offset": 0, 00:09:32.690 "data_size": 65536 00:09:32.690 }, 00:09:32.690 { 00:09:32.690 "name": "BaseBdev3", 00:09:32.690 "uuid": "b417d18f-e601-4afd-9a48-9eb08482096d", 00:09:32.690 "is_configured": true, 00:09:32.690 "data_offset": 0, 00:09:32.690 "data_size": 65536 00:09:32.690 }, 00:09:32.690 { 00:09:32.690 "name": "BaseBdev4", 00:09:32.690 "uuid": "3578a925-aaf6-40a5-8e28-cfc1e41dc170", 00:09:32.690 "is_configured": true, 00:09:32.690 "data_offset": 0, 00:09:32.690 "data_size": 65536 00:09:32.690 } 00:09:32.690 ] 00:09:32.690 }' 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.690 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.258 04:55:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fde70763-f836-483d-b1e6-0021d9f9a652 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.258 [2024-11-21 04:55:49.795629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:33.258 [2024-11-21 04:55:49.795769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:33.258 [2024-11-21 04:55:49.795798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:33.258 [2024-11-21 04:55:49.796166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:33.258 
[2024-11-21 04:55:49.796349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:33.258 [2024-11-21 04:55:49.796398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:33.258 [2024-11-21 04:55:49.796670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.258 NewBaseBdev 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.258 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:33.258 [ 00:09:33.258 { 00:09:33.258 "name": "NewBaseBdev", 00:09:33.258 "aliases": [ 00:09:33.258 "fde70763-f836-483d-b1e6-0021d9f9a652" 00:09:33.258 ], 00:09:33.258 "product_name": "Malloc disk", 00:09:33.258 "block_size": 512, 00:09:33.258 "num_blocks": 65536, 00:09:33.258 "uuid": "fde70763-f836-483d-b1e6-0021d9f9a652", 00:09:33.258 "assigned_rate_limits": { 00:09:33.258 "rw_ios_per_sec": 0, 00:09:33.258 "rw_mbytes_per_sec": 0, 00:09:33.258 "r_mbytes_per_sec": 0, 00:09:33.258 "w_mbytes_per_sec": 0 00:09:33.258 }, 00:09:33.258 "claimed": true, 00:09:33.258 "claim_type": "exclusive_write", 00:09:33.258 "zoned": false, 00:09:33.258 "supported_io_types": { 00:09:33.258 "read": true, 00:09:33.258 "write": true, 00:09:33.258 "unmap": true, 00:09:33.258 "flush": true, 00:09:33.258 "reset": true, 00:09:33.258 "nvme_admin": false, 00:09:33.258 "nvme_io": false, 00:09:33.258 "nvme_io_md": false, 00:09:33.258 "write_zeroes": true, 00:09:33.258 "zcopy": true, 00:09:33.258 "get_zone_info": false, 00:09:33.258 "zone_management": false, 00:09:33.258 "zone_append": false, 00:09:33.258 "compare": false, 00:09:33.258 "compare_and_write": false, 00:09:33.258 "abort": true, 00:09:33.258 "seek_hole": false, 00:09:33.258 "seek_data": false, 00:09:33.258 "copy": true, 00:09:33.258 "nvme_iov_md": false 00:09:33.258 }, 00:09:33.258 "memory_domains": [ 00:09:33.258 { 00:09:33.258 "dma_device_id": "system", 00:09:33.258 "dma_device_type": 1 00:09:33.259 }, 00:09:33.259 { 00:09:33.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.259 "dma_device_type": 2 00:09:33.259 } 00:09:33.259 ], 00:09:33.259 "driver_specific": {} 00:09:33.259 } 00:09:33.259 ] 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.259 "name": "Existed_Raid", 00:09:33.259 "uuid": "e3ac6a4f-142e-491e-98a7-7da683e31bcb", 00:09:33.259 "strip_size_kb": 64, 00:09:33.259 "state": "online", 00:09:33.259 "raid_level": "raid0", 00:09:33.259 "superblock": false, 00:09:33.259 "num_base_bdevs": 4, 00:09:33.259 
"num_base_bdevs_discovered": 4, 00:09:33.259 "num_base_bdevs_operational": 4, 00:09:33.259 "base_bdevs_list": [ 00:09:33.259 { 00:09:33.259 "name": "NewBaseBdev", 00:09:33.259 "uuid": "fde70763-f836-483d-b1e6-0021d9f9a652", 00:09:33.259 "is_configured": true, 00:09:33.259 "data_offset": 0, 00:09:33.259 "data_size": 65536 00:09:33.259 }, 00:09:33.259 { 00:09:33.259 "name": "BaseBdev2", 00:09:33.259 "uuid": "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d", 00:09:33.259 "is_configured": true, 00:09:33.259 "data_offset": 0, 00:09:33.259 "data_size": 65536 00:09:33.259 }, 00:09:33.259 { 00:09:33.259 "name": "BaseBdev3", 00:09:33.259 "uuid": "b417d18f-e601-4afd-9a48-9eb08482096d", 00:09:33.259 "is_configured": true, 00:09:33.259 "data_offset": 0, 00:09:33.259 "data_size": 65536 00:09:33.259 }, 00:09:33.259 { 00:09:33.259 "name": "BaseBdev4", 00:09:33.259 "uuid": "3578a925-aaf6-40a5-8e28-cfc1e41dc170", 00:09:33.259 "is_configured": true, 00:09:33.259 "data_offset": 0, 00:09:33.259 "data_size": 65536 00:09:33.259 } 00:09:33.259 ] 00:09:33.259 }' 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.259 04:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.827 [2024-11-21 04:55:50.315350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.827 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.827 "name": "Existed_Raid", 00:09:33.827 "aliases": [ 00:09:33.827 "e3ac6a4f-142e-491e-98a7-7da683e31bcb" 00:09:33.827 ], 00:09:33.827 "product_name": "Raid Volume", 00:09:33.827 "block_size": 512, 00:09:33.827 "num_blocks": 262144, 00:09:33.827 "uuid": "e3ac6a4f-142e-491e-98a7-7da683e31bcb", 00:09:33.827 "assigned_rate_limits": { 00:09:33.827 "rw_ios_per_sec": 0, 00:09:33.827 "rw_mbytes_per_sec": 0, 00:09:33.827 "r_mbytes_per_sec": 0, 00:09:33.827 "w_mbytes_per_sec": 0 00:09:33.827 }, 00:09:33.827 "claimed": false, 00:09:33.827 "zoned": false, 00:09:33.827 "supported_io_types": { 00:09:33.827 "read": true, 00:09:33.827 "write": true, 00:09:33.827 "unmap": true, 00:09:33.827 "flush": true, 00:09:33.827 "reset": true, 00:09:33.827 "nvme_admin": false, 00:09:33.827 "nvme_io": false, 00:09:33.827 "nvme_io_md": false, 00:09:33.827 "write_zeroes": true, 00:09:33.827 "zcopy": false, 00:09:33.827 "get_zone_info": false, 00:09:33.827 "zone_management": false, 00:09:33.827 "zone_append": false, 00:09:33.827 "compare": false, 00:09:33.827 "compare_and_write": false, 00:09:33.827 "abort": false, 00:09:33.827 "seek_hole": false, 00:09:33.827 "seek_data": false, 00:09:33.827 "copy": false, 00:09:33.827 "nvme_iov_md": false 00:09:33.827 }, 00:09:33.827 "memory_domains": [ 
00:09:33.827 { 00:09:33.827 "dma_device_id": "system", 00:09:33.827 "dma_device_type": 1 00:09:33.827 }, 00:09:33.827 { 00:09:33.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.827 "dma_device_type": 2 00:09:33.827 }, 00:09:33.827 { 00:09:33.827 "dma_device_id": "system", 00:09:33.827 "dma_device_type": 1 00:09:33.827 }, 00:09:33.827 { 00:09:33.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.827 "dma_device_type": 2 00:09:33.827 }, 00:09:33.827 { 00:09:33.827 "dma_device_id": "system", 00:09:33.827 "dma_device_type": 1 00:09:33.827 }, 00:09:33.827 { 00:09:33.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.827 "dma_device_type": 2 00:09:33.827 }, 00:09:33.827 { 00:09:33.827 "dma_device_id": "system", 00:09:33.827 "dma_device_type": 1 00:09:33.827 }, 00:09:33.827 { 00:09:33.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.827 "dma_device_type": 2 00:09:33.827 } 00:09:33.827 ], 00:09:33.827 "driver_specific": { 00:09:33.827 "raid": { 00:09:33.827 "uuid": "e3ac6a4f-142e-491e-98a7-7da683e31bcb", 00:09:33.827 "strip_size_kb": 64, 00:09:33.827 "state": "online", 00:09:33.827 "raid_level": "raid0", 00:09:33.827 "superblock": false, 00:09:33.827 "num_base_bdevs": 4, 00:09:33.827 "num_base_bdevs_discovered": 4, 00:09:33.827 "num_base_bdevs_operational": 4, 00:09:33.827 "base_bdevs_list": [ 00:09:33.827 { 00:09:33.828 "name": "NewBaseBdev", 00:09:33.828 "uuid": "fde70763-f836-483d-b1e6-0021d9f9a652", 00:09:33.828 "is_configured": true, 00:09:33.828 "data_offset": 0, 00:09:33.828 "data_size": 65536 00:09:33.828 }, 00:09:33.828 { 00:09:33.828 "name": "BaseBdev2", 00:09:33.828 "uuid": "d1230fe5-7fbb-4d77-81cd-b5eee0cf2c3d", 00:09:33.828 "is_configured": true, 00:09:33.828 "data_offset": 0, 00:09:33.828 "data_size": 65536 00:09:33.828 }, 00:09:33.828 { 00:09:33.828 "name": "BaseBdev3", 00:09:33.828 "uuid": "b417d18f-e601-4afd-9a48-9eb08482096d", 00:09:33.828 "is_configured": true, 00:09:33.828 "data_offset": 0, 00:09:33.828 "data_size": 65536 
00:09:33.828 }, 00:09:33.828 { 00:09:33.828 "name": "BaseBdev4", 00:09:33.828 "uuid": "3578a925-aaf6-40a5-8e28-cfc1e41dc170", 00:09:33.828 "is_configured": true, 00:09:33.828 "data_offset": 0, 00:09:33.828 "data_size": 65536 00:09:33.828 } 00:09:33.828 ] 00:09:33.828 } 00:09:33.828 } 00:09:33.828 }' 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:33.828 BaseBdev2 00:09:33.828 BaseBdev3 00:09:33.828 BaseBdev4' 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.828 
04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.828 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.087 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.087 [2024-11-21 04:55:50.622440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.087 [2024-11-21 04:55:50.622472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.087 [2024-11-21 04:55:50.622543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.087 [2024-11-21 04:55:50.622607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.088 [2024-11-21 04:55:50.622618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80513 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 80513 ']' 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80513 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80513 00:09:34.088 killing process with pid 80513 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80513' 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 80513 00:09:34.088 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 80513 00:09:34.088 [2024-11-21 04:55:50.666178] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.088 [2024-11-21 04:55:50.706721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:34.348 00:09:34.348 real 0m9.609s 00:09:34.348 user 0m16.425s 00:09:34.348 sys 0m2.082s 00:09:34.348 ************************************ 00:09:34.348 END TEST raid_state_function_test 00:09:34.348 ************************************ 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.348 04:55:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:34.348 04:55:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:34.348 04:55:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.348 04:55:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:34.348 ************************************ 00:09:34.348 START TEST raid_state_function_test_sb 00:09:34.348 ************************************ 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:34.348 
04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:34.348 Process raid pid: 81162 00:09:34.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81162 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81162' 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81162 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81162 ']' 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.348 04:55:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:34.348 [2024-11-21 04:55:51.079649] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:09:34.348 [2024-11-21 04:55:51.080182] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.608 [2024-11-21 04:55:51.252503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.608 [2024-11-21 04:55:51.278533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.608 [2024-11-21 04:55:51.320443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.608 [2024-11-21 04:55:51.320484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.565 [2024-11-21 04:55:51.925768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.565 [2024-11-21 04:55:51.925828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.565 [2024-11-21 04:55:51.925838] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.565 [2024-11-21 04:55:51.925850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.565 [2024-11-21 04:55:51.925856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:35.565 [2024-11-21 04:55:51.925867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.565 [2024-11-21 04:55:51.925873] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:35.565 [2024-11-21 04:55:51.925882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.565 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.566 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.566 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.566 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.566 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.566 04:55:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.566 04:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.566 04:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.566 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.566 "name": "Existed_Raid", 00:09:35.566 "uuid": "7a872762-f0b8-47c0-bb62-fdb110a6920e", 00:09:35.566 "strip_size_kb": 64, 00:09:35.566 "state": "configuring", 00:09:35.566 "raid_level": "raid0", 00:09:35.566 "superblock": true, 00:09:35.566 "num_base_bdevs": 4, 00:09:35.566 "num_base_bdevs_discovered": 0, 00:09:35.566 "num_base_bdevs_operational": 4, 00:09:35.566 "base_bdevs_list": [ 00:09:35.566 { 00:09:35.566 "name": "BaseBdev1", 00:09:35.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.566 "is_configured": false, 00:09:35.566 "data_offset": 0, 00:09:35.566 "data_size": 0 00:09:35.566 }, 00:09:35.566 { 00:09:35.566 "name": "BaseBdev2", 00:09:35.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.566 "is_configured": false, 00:09:35.566 "data_offset": 0, 00:09:35.566 "data_size": 0 00:09:35.566 }, 00:09:35.566 { 00:09:35.566 "name": "BaseBdev3", 00:09:35.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.566 "is_configured": false, 00:09:35.566 "data_offset": 0, 00:09:35.566 "data_size": 0 00:09:35.566 }, 00:09:35.566 { 00:09:35.566 "name": "BaseBdev4", 00:09:35.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.566 "is_configured": false, 00:09:35.566 "data_offset": 0, 00:09:35.566 "data_size": 0 00:09:35.566 } 00:09:35.566 ] 00:09:35.566 }' 00:09:35.566 04:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.566 04:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.826 04:55:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.826 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.826 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.826 [2024-11-21 04:55:52.328972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.826 [2024-11-21 04:55:52.329071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:35.826 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.826 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:35.826 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.826 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.826 [2024-11-21 04:55:52.336968] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.826 [2024-11-21 04:55:52.337066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.826 [2024-11-21 04:55:52.337094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.826 [2024-11-21 04:55:52.337131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.826 [2024-11-21 04:55:52.337156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.826 [2024-11-21 04:55:52.337193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.826 [2024-11-21 04:55:52.337212] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:35.826 [2024-11-21 04:55:52.337245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:35.826 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.827 [2024-11-21 04:55:52.353700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.827 BaseBdev1 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.827 [ 00:09:35.827 { 00:09:35.827 "name": "BaseBdev1", 00:09:35.827 "aliases": [ 00:09:35.827 "030e99e2-f4a4-4422-b72d-fb96c612f4fa" 00:09:35.827 ], 00:09:35.827 "product_name": "Malloc disk", 00:09:35.827 "block_size": 512, 00:09:35.827 "num_blocks": 65536, 00:09:35.827 "uuid": "030e99e2-f4a4-4422-b72d-fb96c612f4fa", 00:09:35.827 "assigned_rate_limits": { 00:09:35.827 "rw_ios_per_sec": 0, 00:09:35.827 "rw_mbytes_per_sec": 0, 00:09:35.827 "r_mbytes_per_sec": 0, 00:09:35.827 "w_mbytes_per_sec": 0 00:09:35.827 }, 00:09:35.827 "claimed": true, 00:09:35.827 "claim_type": "exclusive_write", 00:09:35.827 "zoned": false, 00:09:35.827 "supported_io_types": { 00:09:35.827 "read": true, 00:09:35.827 "write": true, 00:09:35.827 "unmap": true, 00:09:35.827 "flush": true, 00:09:35.827 "reset": true, 00:09:35.827 "nvme_admin": false, 00:09:35.827 "nvme_io": false, 00:09:35.827 "nvme_io_md": false, 00:09:35.827 "write_zeroes": true, 00:09:35.827 "zcopy": true, 00:09:35.827 "get_zone_info": false, 00:09:35.827 "zone_management": false, 00:09:35.827 "zone_append": false, 00:09:35.827 "compare": false, 00:09:35.827 "compare_and_write": false, 00:09:35.827 "abort": true, 00:09:35.827 "seek_hole": false, 00:09:35.827 "seek_data": false, 00:09:35.827 "copy": true, 00:09:35.827 "nvme_iov_md": false 00:09:35.827 }, 00:09:35.827 "memory_domains": [ 00:09:35.827 { 00:09:35.827 "dma_device_id": "system", 00:09:35.827 "dma_device_type": 1 00:09:35.827 }, 00:09:35.827 { 00:09:35.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.827 "dma_device_type": 2 00:09:35.827 } 00:09:35.827 ], 00:09:35.827 "driver_specific": {} 
00:09:35.827 } 00:09:35.827 ] 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.827 "name": "Existed_Raid", 00:09:35.827 "uuid": "8fc8fb20-0253-476e-9825-adfc0cce718e", 00:09:35.827 "strip_size_kb": 64, 00:09:35.827 "state": "configuring", 00:09:35.827 "raid_level": "raid0", 00:09:35.827 "superblock": true, 00:09:35.827 "num_base_bdevs": 4, 00:09:35.827 "num_base_bdevs_discovered": 1, 00:09:35.827 "num_base_bdevs_operational": 4, 00:09:35.827 "base_bdevs_list": [ 00:09:35.827 { 00:09:35.827 "name": "BaseBdev1", 00:09:35.827 "uuid": "030e99e2-f4a4-4422-b72d-fb96c612f4fa", 00:09:35.827 "is_configured": true, 00:09:35.827 "data_offset": 2048, 00:09:35.827 "data_size": 63488 00:09:35.827 }, 00:09:35.827 { 00:09:35.827 "name": "BaseBdev2", 00:09:35.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.827 "is_configured": false, 00:09:35.827 "data_offset": 0, 00:09:35.827 "data_size": 0 00:09:35.827 }, 00:09:35.827 { 00:09:35.827 "name": "BaseBdev3", 00:09:35.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.827 "is_configured": false, 00:09:35.827 "data_offset": 0, 00:09:35.827 "data_size": 0 00:09:35.827 }, 00:09:35.827 { 00:09:35.827 "name": "BaseBdev4", 00:09:35.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.827 "is_configured": false, 00:09:35.827 "data_offset": 0, 00:09:35.827 "data_size": 0 00:09:35.827 } 00:09:35.827 ] 00:09:35.827 }' 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.827 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.396 [2024-11-21 04:55:52.848875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.396 [2024-11-21 04:55:52.848919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.396 [2024-11-21 04:55:52.860889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.396 [2024-11-21 04:55:52.862756] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.396 [2024-11-21 04:55:52.862794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.396 [2024-11-21 04:55:52.862804] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:36.396 [2024-11-21 04:55:52.862813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.396 [2024-11-21 04:55:52.862820] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:36.396 [2024-11-21 04:55:52.862828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:36.396 04:55:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.396 "name": 
"Existed_Raid", 00:09:36.396 "uuid": "49fd8ce1-deb2-4ee6-a512-55b24dbb9e7c", 00:09:36.396 "strip_size_kb": 64, 00:09:36.396 "state": "configuring", 00:09:36.396 "raid_level": "raid0", 00:09:36.396 "superblock": true, 00:09:36.396 "num_base_bdevs": 4, 00:09:36.396 "num_base_bdevs_discovered": 1, 00:09:36.396 "num_base_bdevs_operational": 4, 00:09:36.396 "base_bdevs_list": [ 00:09:36.396 { 00:09:36.396 "name": "BaseBdev1", 00:09:36.396 "uuid": "030e99e2-f4a4-4422-b72d-fb96c612f4fa", 00:09:36.396 "is_configured": true, 00:09:36.396 "data_offset": 2048, 00:09:36.396 "data_size": 63488 00:09:36.396 }, 00:09:36.396 { 00:09:36.396 "name": "BaseBdev2", 00:09:36.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.396 "is_configured": false, 00:09:36.396 "data_offset": 0, 00:09:36.396 "data_size": 0 00:09:36.396 }, 00:09:36.396 { 00:09:36.396 "name": "BaseBdev3", 00:09:36.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.396 "is_configured": false, 00:09:36.396 "data_offset": 0, 00:09:36.396 "data_size": 0 00:09:36.396 }, 00:09:36.396 { 00:09:36.396 "name": "BaseBdev4", 00:09:36.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.396 "is_configured": false, 00:09:36.396 "data_offset": 0, 00:09:36.396 "data_size": 0 00:09:36.396 } 00:09:36.396 ] 00:09:36.396 }' 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.396 04:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.656 [2024-11-21 04:55:53.331081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:36.656 BaseBdev2 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.656 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.656 [ 00:09:36.656 { 00:09:36.656 "name": "BaseBdev2", 00:09:36.656 "aliases": [ 00:09:36.656 "9d004ae6-2f3d-482e-8cae-8b49d7ce834a" 00:09:36.656 ], 00:09:36.656 "product_name": "Malloc disk", 00:09:36.656 "block_size": 512, 00:09:36.656 "num_blocks": 65536, 00:09:36.656 "uuid": "9d004ae6-2f3d-482e-8cae-8b49d7ce834a", 00:09:36.656 
"assigned_rate_limits": { 00:09:36.656 "rw_ios_per_sec": 0, 00:09:36.656 "rw_mbytes_per_sec": 0, 00:09:36.656 "r_mbytes_per_sec": 0, 00:09:36.656 "w_mbytes_per_sec": 0 00:09:36.656 }, 00:09:36.656 "claimed": true, 00:09:36.656 "claim_type": "exclusive_write", 00:09:36.656 "zoned": false, 00:09:36.656 "supported_io_types": { 00:09:36.656 "read": true, 00:09:36.656 "write": true, 00:09:36.656 "unmap": true, 00:09:36.656 "flush": true, 00:09:36.656 "reset": true, 00:09:36.656 "nvme_admin": false, 00:09:36.656 "nvme_io": false, 00:09:36.656 "nvme_io_md": false, 00:09:36.656 "write_zeroes": true, 00:09:36.656 "zcopy": true, 00:09:36.656 "get_zone_info": false, 00:09:36.656 "zone_management": false, 00:09:36.656 "zone_append": false, 00:09:36.656 "compare": false, 00:09:36.656 "compare_and_write": false, 00:09:36.656 "abort": true, 00:09:36.656 "seek_hole": false, 00:09:36.656 "seek_data": false, 00:09:36.656 "copy": true, 00:09:36.656 "nvme_iov_md": false 00:09:36.656 }, 00:09:36.656 "memory_domains": [ 00:09:36.656 { 00:09:36.656 "dma_device_id": "system", 00:09:36.656 "dma_device_type": 1 00:09:36.656 }, 00:09:36.656 { 00:09:36.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.656 "dma_device_type": 2 00:09:36.656 } 00:09:36.657 ], 00:09:36.657 "driver_specific": {} 00:09:36.657 } 00:09:36.657 ] 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.657 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.916 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.916 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.916 "name": "Existed_Raid", 00:09:36.916 "uuid": "49fd8ce1-deb2-4ee6-a512-55b24dbb9e7c", 00:09:36.916 "strip_size_kb": 64, 00:09:36.916 "state": "configuring", 00:09:36.916 "raid_level": "raid0", 00:09:36.916 "superblock": true, 00:09:36.916 "num_base_bdevs": 4, 00:09:36.916 "num_base_bdevs_discovered": 2, 00:09:36.916 "num_base_bdevs_operational": 4, 
00:09:36.916 "base_bdevs_list": [ 00:09:36.916 { 00:09:36.916 "name": "BaseBdev1", 00:09:36.916 "uuid": "030e99e2-f4a4-4422-b72d-fb96c612f4fa", 00:09:36.916 "is_configured": true, 00:09:36.916 "data_offset": 2048, 00:09:36.916 "data_size": 63488 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "name": "BaseBdev2", 00:09:36.916 "uuid": "9d004ae6-2f3d-482e-8cae-8b49d7ce834a", 00:09:36.916 "is_configured": true, 00:09:36.916 "data_offset": 2048, 00:09:36.916 "data_size": 63488 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "name": "BaseBdev3", 00:09:36.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.916 "is_configured": false, 00:09:36.916 "data_offset": 0, 00:09:36.916 "data_size": 0 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "name": "BaseBdev4", 00:09:36.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.916 "is_configured": false, 00:09:36.916 "data_offset": 0, 00:09:36.916 "data_size": 0 00:09:36.916 } 00:09:36.916 ] 00:09:36.917 }' 00:09:36.917 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.917 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.176 [2024-11-21 04:55:53.821503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.176 BaseBdev3 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.176 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.176 [ 00:09:37.176 { 00:09:37.176 "name": "BaseBdev3", 00:09:37.176 "aliases": [ 00:09:37.176 "99633978-d3f6-4437-ba69-7c41d07ecaef" 00:09:37.176 ], 00:09:37.176 "product_name": "Malloc disk", 00:09:37.176 "block_size": 512, 00:09:37.176 "num_blocks": 65536, 00:09:37.176 "uuid": "99633978-d3f6-4437-ba69-7c41d07ecaef", 00:09:37.176 "assigned_rate_limits": { 00:09:37.176 "rw_ios_per_sec": 0, 00:09:37.176 "rw_mbytes_per_sec": 0, 00:09:37.176 "r_mbytes_per_sec": 0, 00:09:37.176 "w_mbytes_per_sec": 0 00:09:37.176 }, 00:09:37.176 "claimed": true, 00:09:37.176 "claim_type": "exclusive_write", 00:09:37.177 "zoned": false, 00:09:37.177 "supported_io_types": { 00:09:37.177 "read": true, 00:09:37.177 
"write": true, 00:09:37.177 "unmap": true, 00:09:37.177 "flush": true, 00:09:37.177 "reset": true, 00:09:37.177 "nvme_admin": false, 00:09:37.177 "nvme_io": false, 00:09:37.177 "nvme_io_md": false, 00:09:37.177 "write_zeroes": true, 00:09:37.177 "zcopy": true, 00:09:37.177 "get_zone_info": false, 00:09:37.177 "zone_management": false, 00:09:37.177 "zone_append": false, 00:09:37.177 "compare": false, 00:09:37.177 "compare_and_write": false, 00:09:37.177 "abort": true, 00:09:37.177 "seek_hole": false, 00:09:37.177 "seek_data": false, 00:09:37.177 "copy": true, 00:09:37.177 "nvme_iov_md": false 00:09:37.177 }, 00:09:37.177 "memory_domains": [ 00:09:37.177 { 00:09:37.177 "dma_device_id": "system", 00:09:37.177 "dma_device_type": 1 00:09:37.177 }, 00:09:37.177 { 00:09:37.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.177 "dma_device_type": 2 00:09:37.177 } 00:09:37.177 ], 00:09:37.177 "driver_specific": {} 00:09:37.177 } 00:09:37.177 ] 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.177 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.177 "name": "Existed_Raid", 00:09:37.177 "uuid": "49fd8ce1-deb2-4ee6-a512-55b24dbb9e7c", 00:09:37.177 "strip_size_kb": 64, 00:09:37.177 "state": "configuring", 00:09:37.177 "raid_level": "raid0", 00:09:37.177 "superblock": true, 00:09:37.177 "num_base_bdevs": 4, 00:09:37.177 "num_base_bdevs_discovered": 3, 00:09:37.177 "num_base_bdevs_operational": 4, 00:09:37.177 "base_bdevs_list": [ 00:09:37.177 { 00:09:37.177 "name": "BaseBdev1", 00:09:37.177 "uuid": "030e99e2-f4a4-4422-b72d-fb96c612f4fa", 00:09:37.177 "is_configured": true, 00:09:37.177 "data_offset": 2048, 00:09:37.177 "data_size": 63488 00:09:37.177 }, 00:09:37.177 { 00:09:37.177 "name": "BaseBdev2", 00:09:37.177 "uuid": 
"9d004ae6-2f3d-482e-8cae-8b49d7ce834a", 00:09:37.177 "is_configured": true, 00:09:37.177 "data_offset": 2048, 00:09:37.177 "data_size": 63488 00:09:37.177 }, 00:09:37.177 { 00:09:37.177 "name": "BaseBdev3", 00:09:37.177 "uuid": "99633978-d3f6-4437-ba69-7c41d07ecaef", 00:09:37.177 "is_configured": true, 00:09:37.177 "data_offset": 2048, 00:09:37.177 "data_size": 63488 00:09:37.177 }, 00:09:37.177 { 00:09:37.177 "name": "BaseBdev4", 00:09:37.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.177 "is_configured": false, 00:09:37.177 "data_offset": 0, 00:09:37.177 "data_size": 0 00:09:37.177 } 00:09:37.177 ] 00:09:37.177 }' 00:09:37.436 04:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.436 04:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 [2024-11-21 04:55:54.315998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:37.696 [2024-11-21 04:55:54.316328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:37.696 [2024-11-21 04:55:54.316396] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:37.696 BaseBdev4 00:09:37.696 [2024-11-21 04:55:54.316709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:37.696 [2024-11-21 04:55:54.316889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:37.696 [2024-11-21 04:55:54.316957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:09:37.696 [2024-11-21 04:55:54.317196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.696 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 [ 00:09:37.696 { 00:09:37.696 "name": "BaseBdev4", 00:09:37.696 "aliases": [ 00:09:37.696 "1f08cacd-f652-4cc2-8248-64ecada3bba9" 00:09:37.696 ], 00:09:37.696 "product_name": "Malloc disk", 00:09:37.696 "block_size": 512, 00:09:37.696 
"num_blocks": 65536, 00:09:37.696 "uuid": "1f08cacd-f652-4cc2-8248-64ecada3bba9", 00:09:37.696 "assigned_rate_limits": { 00:09:37.696 "rw_ios_per_sec": 0, 00:09:37.696 "rw_mbytes_per_sec": 0, 00:09:37.696 "r_mbytes_per_sec": 0, 00:09:37.696 "w_mbytes_per_sec": 0 00:09:37.696 }, 00:09:37.696 "claimed": true, 00:09:37.696 "claim_type": "exclusive_write", 00:09:37.696 "zoned": false, 00:09:37.696 "supported_io_types": { 00:09:37.696 "read": true, 00:09:37.696 "write": true, 00:09:37.697 "unmap": true, 00:09:37.697 "flush": true, 00:09:37.697 "reset": true, 00:09:37.697 "nvme_admin": false, 00:09:37.697 "nvme_io": false, 00:09:37.697 "nvme_io_md": false, 00:09:37.697 "write_zeroes": true, 00:09:37.697 "zcopy": true, 00:09:37.697 "get_zone_info": false, 00:09:37.697 "zone_management": false, 00:09:37.697 "zone_append": false, 00:09:37.697 "compare": false, 00:09:37.697 "compare_and_write": false, 00:09:37.697 "abort": true, 00:09:37.697 "seek_hole": false, 00:09:37.697 "seek_data": false, 00:09:37.697 "copy": true, 00:09:37.697 "nvme_iov_md": false 00:09:37.697 }, 00:09:37.697 "memory_domains": [ 00:09:37.697 { 00:09:37.697 "dma_device_id": "system", 00:09:37.697 "dma_device_type": 1 00:09:37.697 }, 00:09:37.697 { 00:09:37.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.697 "dma_device_type": 2 00:09:37.697 } 00:09:37.697 ], 00:09:37.697 "driver_specific": {} 00:09:37.697 } 00:09:37.697 ] 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.697 "name": "Existed_Raid", 00:09:37.697 "uuid": "49fd8ce1-deb2-4ee6-a512-55b24dbb9e7c", 00:09:37.697 "strip_size_kb": 64, 00:09:37.697 "state": "online", 00:09:37.697 "raid_level": "raid0", 00:09:37.697 "superblock": true, 00:09:37.697 "num_base_bdevs": 4, 
00:09:37.697 "num_base_bdevs_discovered": 4, 00:09:37.697 "num_base_bdevs_operational": 4, 00:09:37.697 "base_bdevs_list": [ 00:09:37.697 { 00:09:37.697 "name": "BaseBdev1", 00:09:37.697 "uuid": "030e99e2-f4a4-4422-b72d-fb96c612f4fa", 00:09:37.697 "is_configured": true, 00:09:37.697 "data_offset": 2048, 00:09:37.697 "data_size": 63488 00:09:37.697 }, 00:09:37.697 { 00:09:37.697 "name": "BaseBdev2", 00:09:37.697 "uuid": "9d004ae6-2f3d-482e-8cae-8b49d7ce834a", 00:09:37.697 "is_configured": true, 00:09:37.697 "data_offset": 2048, 00:09:37.697 "data_size": 63488 00:09:37.697 }, 00:09:37.697 { 00:09:37.697 "name": "BaseBdev3", 00:09:37.697 "uuid": "99633978-d3f6-4437-ba69-7c41d07ecaef", 00:09:37.697 "is_configured": true, 00:09:37.697 "data_offset": 2048, 00:09:37.697 "data_size": 63488 00:09:37.697 }, 00:09:37.697 { 00:09:37.697 "name": "BaseBdev4", 00:09:37.697 "uuid": "1f08cacd-f652-4cc2-8248-64ecada3bba9", 00:09:37.697 "is_configured": true, 00:09:37.697 "data_offset": 2048, 00:09:37.697 "data_size": 63488 00:09:37.697 } 00:09:37.697 ] 00:09:37.697 }' 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.697 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.267 
04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.267 [2024-11-21 04:55:54.815585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.267 "name": "Existed_Raid", 00:09:38.267 "aliases": [ 00:09:38.267 "49fd8ce1-deb2-4ee6-a512-55b24dbb9e7c" 00:09:38.267 ], 00:09:38.267 "product_name": "Raid Volume", 00:09:38.267 "block_size": 512, 00:09:38.267 "num_blocks": 253952, 00:09:38.267 "uuid": "49fd8ce1-deb2-4ee6-a512-55b24dbb9e7c", 00:09:38.267 "assigned_rate_limits": { 00:09:38.267 "rw_ios_per_sec": 0, 00:09:38.267 "rw_mbytes_per_sec": 0, 00:09:38.267 "r_mbytes_per_sec": 0, 00:09:38.267 "w_mbytes_per_sec": 0 00:09:38.267 }, 00:09:38.267 "claimed": false, 00:09:38.267 "zoned": false, 00:09:38.267 "supported_io_types": { 00:09:38.267 "read": true, 00:09:38.267 "write": true, 00:09:38.267 "unmap": true, 00:09:38.267 "flush": true, 00:09:38.267 "reset": true, 00:09:38.267 "nvme_admin": false, 00:09:38.267 "nvme_io": false, 00:09:38.267 "nvme_io_md": false, 00:09:38.267 "write_zeroes": true, 00:09:38.267 "zcopy": false, 00:09:38.267 "get_zone_info": false, 00:09:38.267 "zone_management": false, 00:09:38.267 "zone_append": false, 00:09:38.267 "compare": false, 00:09:38.267 "compare_and_write": false, 00:09:38.267 "abort": false, 00:09:38.267 "seek_hole": false, 00:09:38.267 "seek_data": false, 00:09:38.267 "copy": false, 00:09:38.267 
"nvme_iov_md": false 00:09:38.267 }, 00:09:38.267 "memory_domains": [ 00:09:38.267 { 00:09:38.267 "dma_device_id": "system", 00:09:38.267 "dma_device_type": 1 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.267 "dma_device_type": 2 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "dma_device_id": "system", 00:09:38.267 "dma_device_type": 1 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.267 "dma_device_type": 2 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "dma_device_id": "system", 00:09:38.267 "dma_device_type": 1 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.267 "dma_device_type": 2 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "dma_device_id": "system", 00:09:38.267 "dma_device_type": 1 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.267 "dma_device_type": 2 00:09:38.267 } 00:09:38.267 ], 00:09:38.267 "driver_specific": { 00:09:38.267 "raid": { 00:09:38.267 "uuid": "49fd8ce1-deb2-4ee6-a512-55b24dbb9e7c", 00:09:38.267 "strip_size_kb": 64, 00:09:38.267 "state": "online", 00:09:38.267 "raid_level": "raid0", 00:09:38.267 "superblock": true, 00:09:38.267 "num_base_bdevs": 4, 00:09:38.267 "num_base_bdevs_discovered": 4, 00:09:38.267 "num_base_bdevs_operational": 4, 00:09:38.267 "base_bdevs_list": [ 00:09:38.267 { 00:09:38.267 "name": "BaseBdev1", 00:09:38.267 "uuid": "030e99e2-f4a4-4422-b72d-fb96c612f4fa", 00:09:38.267 "is_configured": true, 00:09:38.267 "data_offset": 2048, 00:09:38.267 "data_size": 63488 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "name": "BaseBdev2", 00:09:38.267 "uuid": "9d004ae6-2f3d-482e-8cae-8b49d7ce834a", 00:09:38.267 "is_configured": true, 00:09:38.267 "data_offset": 2048, 00:09:38.267 "data_size": 63488 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "name": "BaseBdev3", 00:09:38.267 "uuid": "99633978-d3f6-4437-ba69-7c41d07ecaef", 00:09:38.267 "is_configured": true, 
00:09:38.267 "data_offset": 2048, 00:09:38.267 "data_size": 63488 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "name": "BaseBdev4", 00:09:38.267 "uuid": "1f08cacd-f652-4cc2-8248-64ecada3bba9", 00:09:38.267 "is_configured": true, 00:09:38.267 "data_offset": 2048, 00:09:38.267 "data_size": 63488 00:09:38.267 } 00:09:38.267 ] 00:09:38.267 } 00:09:38.267 } 00:09:38.267 }' 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:38.267 BaseBdev2 00:09:38.267 BaseBdev3 00:09:38.267 BaseBdev4' 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.267 04:55:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.267 04:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.527 [2024-11-21 04:55:55.114769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.527 [2024-11-21 04:55:55.114854] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.527 [2024-11-21 04:55:55.114942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.527 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.528 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.528 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.528 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.528 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:38.528 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.528 "name": "Existed_Raid", 00:09:38.528 "uuid": "49fd8ce1-deb2-4ee6-a512-55b24dbb9e7c", 00:09:38.528 "strip_size_kb": 64, 00:09:38.528 "state": "offline", 00:09:38.528 "raid_level": "raid0", 00:09:38.528 "superblock": true, 00:09:38.528 "num_base_bdevs": 4, 00:09:38.528 "num_base_bdevs_discovered": 3, 00:09:38.528 "num_base_bdevs_operational": 3, 00:09:38.528 "base_bdevs_list": [ 00:09:38.528 { 00:09:38.528 "name": null, 00:09:38.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.528 "is_configured": false, 00:09:38.528 "data_offset": 0, 00:09:38.528 "data_size": 63488 00:09:38.528 }, 00:09:38.528 { 00:09:38.528 "name": "BaseBdev2", 00:09:38.528 "uuid": "9d004ae6-2f3d-482e-8cae-8b49d7ce834a", 00:09:38.528 "is_configured": true, 00:09:38.528 "data_offset": 2048, 00:09:38.528 "data_size": 63488 00:09:38.528 }, 00:09:38.528 { 00:09:38.528 "name": "BaseBdev3", 00:09:38.528 "uuid": "99633978-d3f6-4437-ba69-7c41d07ecaef", 00:09:38.528 "is_configured": true, 00:09:38.528 "data_offset": 2048, 00:09:38.528 "data_size": 63488 00:09:38.528 }, 00:09:38.528 { 00:09:38.528 "name": "BaseBdev4", 00:09:38.528 "uuid": "1f08cacd-f652-4cc2-8248-64ecada3bba9", 00:09:38.528 "is_configured": true, 00:09:38.528 "data_offset": 2048, 00:09:38.528 "data_size": 63488 00:09:38.528 } 00:09:38.528 ] 00:09:38.528 }' 00:09:38.528 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.528 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.096 04:55:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 [2024-11-21 04:55:55.649539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 [2024-11-21 04:55:55.716660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:39.096 04:55:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 [2024-11-21 04:55:55.771424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:39.096 [2024-11-21 04:55:55.771473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:39.096 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.356 BaseBdev2 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.356 [ 00:09:39.356 { 00:09:39.356 "name": "BaseBdev2", 00:09:39.356 "aliases": [ 00:09:39.356 
"fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8" 00:09:39.356 ], 00:09:39.356 "product_name": "Malloc disk", 00:09:39.356 "block_size": 512, 00:09:39.356 "num_blocks": 65536, 00:09:39.356 "uuid": "fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8", 00:09:39.356 "assigned_rate_limits": { 00:09:39.356 "rw_ios_per_sec": 0, 00:09:39.356 "rw_mbytes_per_sec": 0, 00:09:39.356 "r_mbytes_per_sec": 0, 00:09:39.356 "w_mbytes_per_sec": 0 00:09:39.356 }, 00:09:39.356 "claimed": false, 00:09:39.356 "zoned": false, 00:09:39.356 "supported_io_types": { 00:09:39.356 "read": true, 00:09:39.356 "write": true, 00:09:39.356 "unmap": true, 00:09:39.356 "flush": true, 00:09:39.356 "reset": true, 00:09:39.356 "nvme_admin": false, 00:09:39.356 "nvme_io": false, 00:09:39.356 "nvme_io_md": false, 00:09:39.356 "write_zeroes": true, 00:09:39.356 "zcopy": true, 00:09:39.356 "get_zone_info": false, 00:09:39.356 "zone_management": false, 00:09:39.356 "zone_append": false, 00:09:39.356 "compare": false, 00:09:39.356 "compare_and_write": false, 00:09:39.356 "abort": true, 00:09:39.356 "seek_hole": false, 00:09:39.356 "seek_data": false, 00:09:39.356 "copy": true, 00:09:39.356 "nvme_iov_md": false 00:09:39.356 }, 00:09:39.356 "memory_domains": [ 00:09:39.356 { 00:09:39.356 "dma_device_id": "system", 00:09:39.356 "dma_device_type": 1 00:09:39.356 }, 00:09:39.356 { 00:09:39.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.356 "dma_device_type": 2 00:09:39.356 } 00:09:39.356 ], 00:09:39.356 "driver_specific": {} 00:09:39.356 } 00:09:39.356 ] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.356 04:55:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.356 BaseBdev3 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.356 [ 00:09:39.356 { 
00:09:39.356 "name": "BaseBdev3", 00:09:39.356 "aliases": [ 00:09:39.356 "a112805f-1386-41e3-b152-22152089dc7b" 00:09:39.356 ], 00:09:39.356 "product_name": "Malloc disk", 00:09:39.356 "block_size": 512, 00:09:39.356 "num_blocks": 65536, 00:09:39.356 "uuid": "a112805f-1386-41e3-b152-22152089dc7b", 00:09:39.356 "assigned_rate_limits": { 00:09:39.356 "rw_ios_per_sec": 0, 00:09:39.356 "rw_mbytes_per_sec": 0, 00:09:39.356 "r_mbytes_per_sec": 0, 00:09:39.356 "w_mbytes_per_sec": 0 00:09:39.356 }, 00:09:39.356 "claimed": false, 00:09:39.356 "zoned": false, 00:09:39.356 "supported_io_types": { 00:09:39.356 "read": true, 00:09:39.356 "write": true, 00:09:39.356 "unmap": true, 00:09:39.356 "flush": true, 00:09:39.356 "reset": true, 00:09:39.356 "nvme_admin": false, 00:09:39.356 "nvme_io": false, 00:09:39.356 "nvme_io_md": false, 00:09:39.356 "write_zeroes": true, 00:09:39.356 "zcopy": true, 00:09:39.356 "get_zone_info": false, 00:09:39.356 "zone_management": false, 00:09:39.356 "zone_append": false, 00:09:39.356 "compare": false, 00:09:39.356 "compare_and_write": false, 00:09:39.356 "abort": true, 00:09:39.356 "seek_hole": false, 00:09:39.356 "seek_data": false, 00:09:39.356 "copy": true, 00:09:39.356 "nvme_iov_md": false 00:09:39.356 }, 00:09:39.356 "memory_domains": [ 00:09:39.356 { 00:09:39.356 "dma_device_id": "system", 00:09:39.356 "dma_device_type": 1 00:09:39.356 }, 00:09:39.356 { 00:09:39.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.356 "dma_device_type": 2 00:09:39.356 } 00:09:39.356 ], 00:09:39.356 "driver_specific": {} 00:09:39.356 } 00:09:39.356 ] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.356 BaseBdev4 00:09:39.356 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:39.357 [ 00:09:39.357 { 00:09:39.357 "name": "BaseBdev4", 00:09:39.357 "aliases": [ 00:09:39.357 "ad1e496a-6393-4b6f-a443-8dbe8a8e3365" 00:09:39.357 ], 00:09:39.357 "product_name": "Malloc disk", 00:09:39.357 "block_size": 512, 00:09:39.357 "num_blocks": 65536, 00:09:39.357 "uuid": "ad1e496a-6393-4b6f-a443-8dbe8a8e3365", 00:09:39.357 "assigned_rate_limits": { 00:09:39.357 "rw_ios_per_sec": 0, 00:09:39.357 "rw_mbytes_per_sec": 0, 00:09:39.357 "r_mbytes_per_sec": 0, 00:09:39.357 "w_mbytes_per_sec": 0 00:09:39.357 }, 00:09:39.357 "claimed": false, 00:09:39.357 "zoned": false, 00:09:39.357 "supported_io_types": { 00:09:39.357 "read": true, 00:09:39.357 "write": true, 00:09:39.357 "unmap": true, 00:09:39.357 "flush": true, 00:09:39.357 "reset": true, 00:09:39.357 "nvme_admin": false, 00:09:39.357 "nvme_io": false, 00:09:39.357 "nvme_io_md": false, 00:09:39.357 "write_zeroes": true, 00:09:39.357 "zcopy": true, 00:09:39.357 "get_zone_info": false, 00:09:39.357 "zone_management": false, 00:09:39.357 "zone_append": false, 00:09:39.357 "compare": false, 00:09:39.357 "compare_and_write": false, 00:09:39.357 "abort": true, 00:09:39.357 "seek_hole": false, 00:09:39.357 "seek_data": false, 00:09:39.357 "copy": true, 00:09:39.357 "nvme_iov_md": false 00:09:39.357 }, 00:09:39.357 "memory_domains": [ 00:09:39.357 { 00:09:39.357 "dma_device_id": "system", 00:09:39.357 "dma_device_type": 1 00:09:39.357 }, 00:09:39.357 { 00:09:39.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.357 "dma_device_type": 2 00:09:39.357 } 00:09:39.357 ], 00:09:39.357 "driver_specific": {} 00:09:39.357 } 00:09:39.357 ] 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.357 04:55:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.357 [2024-11-21 04:55:55.990500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.357 [2024-11-21 04:55:55.990635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.357 [2024-11-21 04:55:55.990675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.357 [2024-11-21 04:55:55.992512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.357 [2024-11-21 04:55:55.992601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.357 04:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.357 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.357 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.357 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.357 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.357 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.357 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.357 "name": "Existed_Raid", 00:09:39.357 "uuid": "3934a29f-5099-4b30-b448-30520bd0d499", 00:09:39.357 "strip_size_kb": 64, 00:09:39.357 "state": "configuring", 00:09:39.357 "raid_level": "raid0", 00:09:39.357 "superblock": true, 00:09:39.357 "num_base_bdevs": 4, 00:09:39.357 "num_base_bdevs_discovered": 3, 00:09:39.357 "num_base_bdevs_operational": 4, 00:09:39.357 "base_bdevs_list": [ 00:09:39.357 { 00:09:39.357 "name": "BaseBdev1", 00:09:39.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.357 "is_configured": false, 00:09:39.357 "data_offset": 0, 00:09:39.357 "data_size": 0 00:09:39.357 }, 00:09:39.357 { 00:09:39.357 "name": "BaseBdev2", 00:09:39.357 "uuid": "fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8", 00:09:39.357 "is_configured": true, 00:09:39.357 "data_offset": 2048, 00:09:39.357 "data_size": 63488 
00:09:39.357 }, 00:09:39.357 { 00:09:39.357 "name": "BaseBdev3", 00:09:39.357 "uuid": "a112805f-1386-41e3-b152-22152089dc7b", 00:09:39.357 "is_configured": true, 00:09:39.357 "data_offset": 2048, 00:09:39.357 "data_size": 63488 00:09:39.357 }, 00:09:39.357 { 00:09:39.357 "name": "BaseBdev4", 00:09:39.357 "uuid": "ad1e496a-6393-4b6f-a443-8dbe8a8e3365", 00:09:39.357 "is_configured": true, 00:09:39.357 "data_offset": 2048, 00:09:39.357 "data_size": 63488 00:09:39.357 } 00:09:39.357 ] 00:09:39.357 }' 00:09:39.357 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.357 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.926 [2024-11-21 04:55:56.457724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.926 "name": "Existed_Raid", 00:09:39.926 "uuid": "3934a29f-5099-4b30-b448-30520bd0d499", 00:09:39.926 "strip_size_kb": 64, 00:09:39.926 "state": "configuring", 00:09:39.926 "raid_level": "raid0", 00:09:39.926 "superblock": true, 00:09:39.926 "num_base_bdevs": 4, 00:09:39.926 "num_base_bdevs_discovered": 2, 00:09:39.926 "num_base_bdevs_operational": 4, 00:09:39.926 "base_bdevs_list": [ 00:09:39.926 { 00:09:39.926 "name": "BaseBdev1", 00:09:39.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.926 "is_configured": false, 00:09:39.926 "data_offset": 0, 00:09:39.926 "data_size": 0 00:09:39.926 }, 00:09:39.926 { 00:09:39.926 "name": null, 00:09:39.926 "uuid": "fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8", 00:09:39.926 "is_configured": false, 00:09:39.926 "data_offset": 0, 00:09:39.926 "data_size": 63488 
00:09:39.926 }, 00:09:39.926 { 00:09:39.926 "name": "BaseBdev3", 00:09:39.926 "uuid": "a112805f-1386-41e3-b152-22152089dc7b", 00:09:39.926 "is_configured": true, 00:09:39.926 "data_offset": 2048, 00:09:39.926 "data_size": 63488 00:09:39.926 }, 00:09:39.926 { 00:09:39.926 "name": "BaseBdev4", 00:09:39.926 "uuid": "ad1e496a-6393-4b6f-a443-8dbe8a8e3365", 00:09:39.926 "is_configured": true, 00:09:39.926 "data_offset": 2048, 00:09:39.926 "data_size": 63488 00:09:39.926 } 00:09:39.926 ] 00:09:39.926 }' 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.926 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.185 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.185 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.185 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.185 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.445 [2024-11-21 04:55:56.971598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.445 BaseBdev1 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.445 04:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.445 [ 00:09:40.445 { 00:09:40.445 "name": "BaseBdev1", 00:09:40.445 "aliases": [ 00:09:40.445 "3ba67a29-2395-473d-a7d5-148875069664" 00:09:40.445 ], 00:09:40.445 "product_name": "Malloc disk", 00:09:40.445 "block_size": 512, 00:09:40.445 "num_blocks": 65536, 00:09:40.445 "uuid": "3ba67a29-2395-473d-a7d5-148875069664", 00:09:40.445 "assigned_rate_limits": { 00:09:40.445 "rw_ios_per_sec": 0, 00:09:40.445 "rw_mbytes_per_sec": 0, 
00:09:40.445 "r_mbytes_per_sec": 0, 00:09:40.445 "w_mbytes_per_sec": 0 00:09:40.445 }, 00:09:40.445 "claimed": true, 00:09:40.445 "claim_type": "exclusive_write", 00:09:40.445 "zoned": false, 00:09:40.445 "supported_io_types": { 00:09:40.445 "read": true, 00:09:40.445 "write": true, 00:09:40.445 "unmap": true, 00:09:40.445 "flush": true, 00:09:40.445 "reset": true, 00:09:40.445 "nvme_admin": false, 00:09:40.445 "nvme_io": false, 00:09:40.445 "nvme_io_md": false, 00:09:40.445 "write_zeroes": true, 00:09:40.445 "zcopy": true, 00:09:40.445 "get_zone_info": false, 00:09:40.445 "zone_management": false, 00:09:40.445 "zone_append": false, 00:09:40.445 "compare": false, 00:09:40.445 "compare_and_write": false, 00:09:40.445 "abort": true, 00:09:40.445 "seek_hole": false, 00:09:40.445 "seek_data": false, 00:09:40.445 "copy": true, 00:09:40.445 "nvme_iov_md": false 00:09:40.445 }, 00:09:40.445 "memory_domains": [ 00:09:40.445 { 00:09:40.445 "dma_device_id": "system", 00:09:40.445 "dma_device_type": 1 00:09:40.445 }, 00:09:40.445 { 00:09:40.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.445 "dma_device_type": 2 00:09:40.445 } 00:09:40.445 ], 00:09:40.445 "driver_specific": {} 00:09:40.445 } 00:09:40.445 ] 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.445 04:55:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.445 "name": "Existed_Raid", 00:09:40.445 "uuid": "3934a29f-5099-4b30-b448-30520bd0d499", 00:09:40.445 "strip_size_kb": 64, 00:09:40.445 "state": "configuring", 00:09:40.445 "raid_level": "raid0", 00:09:40.445 "superblock": true, 00:09:40.445 "num_base_bdevs": 4, 00:09:40.445 "num_base_bdevs_discovered": 3, 00:09:40.445 "num_base_bdevs_operational": 4, 00:09:40.445 "base_bdevs_list": [ 00:09:40.445 { 00:09:40.445 "name": "BaseBdev1", 00:09:40.445 "uuid": "3ba67a29-2395-473d-a7d5-148875069664", 00:09:40.445 "is_configured": true, 00:09:40.445 "data_offset": 2048, 00:09:40.445 "data_size": 63488 00:09:40.445 }, 00:09:40.445 { 
00:09:40.445 "name": null, 00:09:40.445 "uuid": "fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8", 00:09:40.445 "is_configured": false, 00:09:40.445 "data_offset": 0, 00:09:40.445 "data_size": 63488 00:09:40.445 }, 00:09:40.445 { 00:09:40.445 "name": "BaseBdev3", 00:09:40.445 "uuid": "a112805f-1386-41e3-b152-22152089dc7b", 00:09:40.445 "is_configured": true, 00:09:40.445 "data_offset": 2048, 00:09:40.445 "data_size": 63488 00:09:40.445 }, 00:09:40.445 { 00:09:40.445 "name": "BaseBdev4", 00:09:40.445 "uuid": "ad1e496a-6393-4b6f-a443-8dbe8a8e3365", 00:09:40.445 "is_configured": true, 00:09:40.445 "data_offset": 2048, 00:09:40.445 "data_size": 63488 00:09:40.445 } 00:09:40.445 ] 00:09:40.445 }' 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.445 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.013 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.013 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.013 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.013 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:41.013 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.013 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:41.013 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:41.013 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.013 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.013 [2024-11-21 04:55:57.478777] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.013 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.014 04:55:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.014 "name": "Existed_Raid", 00:09:41.014 "uuid": "3934a29f-5099-4b30-b448-30520bd0d499", 00:09:41.014 "strip_size_kb": 64, 00:09:41.014 "state": "configuring", 00:09:41.014 "raid_level": "raid0", 00:09:41.014 "superblock": true, 00:09:41.014 "num_base_bdevs": 4, 00:09:41.014 "num_base_bdevs_discovered": 2, 00:09:41.014 "num_base_bdevs_operational": 4, 00:09:41.014 "base_bdevs_list": [ 00:09:41.014 { 00:09:41.014 "name": "BaseBdev1", 00:09:41.014 "uuid": "3ba67a29-2395-473d-a7d5-148875069664", 00:09:41.014 "is_configured": true, 00:09:41.014 "data_offset": 2048, 00:09:41.014 "data_size": 63488 00:09:41.014 }, 00:09:41.014 { 00:09:41.014 "name": null, 00:09:41.014 "uuid": "fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8", 00:09:41.014 "is_configured": false, 00:09:41.014 "data_offset": 0, 00:09:41.014 "data_size": 63488 00:09:41.014 }, 00:09:41.014 { 00:09:41.014 "name": null, 00:09:41.014 "uuid": "a112805f-1386-41e3-b152-22152089dc7b", 00:09:41.014 "is_configured": false, 00:09:41.014 "data_offset": 0, 00:09:41.014 "data_size": 63488 00:09:41.014 }, 00:09:41.014 { 00:09:41.014 "name": "BaseBdev4", 00:09:41.014 "uuid": "ad1e496a-6393-4b6f-a443-8dbe8a8e3365", 00:09:41.014 "is_configured": true, 00:09:41.014 "data_offset": 2048, 00:09:41.014 "data_size": 63488 00:09:41.014 } 00:09:41.014 ] 00:09:41.014 }' 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.014 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.274 04:55:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.274 [2024-11-21 04:55:57.942011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.274 "name": "Existed_Raid", 00:09:41.274 "uuid": "3934a29f-5099-4b30-b448-30520bd0d499", 00:09:41.274 "strip_size_kb": 64, 00:09:41.274 "state": "configuring", 00:09:41.274 "raid_level": "raid0", 00:09:41.274 "superblock": true, 00:09:41.274 "num_base_bdevs": 4, 00:09:41.274 "num_base_bdevs_discovered": 3, 00:09:41.274 "num_base_bdevs_operational": 4, 00:09:41.274 "base_bdevs_list": [ 00:09:41.274 { 00:09:41.274 "name": "BaseBdev1", 00:09:41.274 "uuid": "3ba67a29-2395-473d-a7d5-148875069664", 00:09:41.274 "is_configured": true, 00:09:41.274 "data_offset": 2048, 00:09:41.274 "data_size": 63488 00:09:41.274 }, 00:09:41.274 { 00:09:41.274 "name": null, 00:09:41.274 "uuid": "fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8", 00:09:41.274 "is_configured": false, 00:09:41.274 "data_offset": 0, 00:09:41.274 "data_size": 63488 00:09:41.274 }, 00:09:41.274 { 00:09:41.274 "name": "BaseBdev3", 00:09:41.274 "uuid": "a112805f-1386-41e3-b152-22152089dc7b", 00:09:41.274 "is_configured": true, 00:09:41.274 "data_offset": 2048, 00:09:41.274 "data_size": 63488 00:09:41.274 }, 00:09:41.274 { 00:09:41.274 "name": "BaseBdev4", 00:09:41.274 "uuid": 
"ad1e496a-6393-4b6f-a443-8dbe8a8e3365", 00:09:41.274 "is_configured": true, 00:09:41.274 "data_offset": 2048, 00:09:41.274 "data_size": 63488 00:09:41.274 } 00:09:41.274 ] 00:09:41.274 }' 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.274 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.842 [2024-11-21 04:55:58.457157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.842 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.843 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.843 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.843 "name": "Existed_Raid", 00:09:41.843 "uuid": "3934a29f-5099-4b30-b448-30520bd0d499", 00:09:41.843 "strip_size_kb": 64, 00:09:41.843 "state": "configuring", 00:09:41.843 "raid_level": "raid0", 00:09:41.843 "superblock": true, 00:09:41.843 "num_base_bdevs": 4, 00:09:41.843 "num_base_bdevs_discovered": 2, 00:09:41.843 "num_base_bdevs_operational": 4, 00:09:41.843 "base_bdevs_list": [ 00:09:41.843 { 00:09:41.843 "name": null, 00:09:41.843 
"uuid": "3ba67a29-2395-473d-a7d5-148875069664", 00:09:41.843 "is_configured": false, 00:09:41.843 "data_offset": 0, 00:09:41.843 "data_size": 63488 00:09:41.843 }, 00:09:41.843 { 00:09:41.843 "name": null, 00:09:41.843 "uuid": "fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8", 00:09:41.843 "is_configured": false, 00:09:41.843 "data_offset": 0, 00:09:41.843 "data_size": 63488 00:09:41.843 }, 00:09:41.843 { 00:09:41.843 "name": "BaseBdev3", 00:09:41.843 "uuid": "a112805f-1386-41e3-b152-22152089dc7b", 00:09:41.843 "is_configured": true, 00:09:41.843 "data_offset": 2048, 00:09:41.843 "data_size": 63488 00:09:41.843 }, 00:09:41.843 { 00:09:41.843 "name": "BaseBdev4", 00:09:41.843 "uuid": "ad1e496a-6393-4b6f-a443-8dbe8a8e3365", 00:09:41.843 "is_configured": true, 00:09:41.843 "data_offset": 2048, 00:09:41.843 "data_size": 63488 00:09:41.843 } 00:09:41.843 ] 00:09:41.843 }' 00:09:41.843 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.843 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.409 [2024-11-21 04:55:58.914924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.409 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.410 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.410 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.410 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.410 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.410 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.410 04:55:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.410 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.410 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.410 "name": "Existed_Raid", 00:09:42.410 "uuid": "3934a29f-5099-4b30-b448-30520bd0d499", 00:09:42.410 "strip_size_kb": 64, 00:09:42.410 "state": "configuring", 00:09:42.410 "raid_level": "raid0", 00:09:42.410 "superblock": true, 00:09:42.410 "num_base_bdevs": 4, 00:09:42.410 "num_base_bdevs_discovered": 3, 00:09:42.410 "num_base_bdevs_operational": 4, 00:09:42.410 "base_bdevs_list": [ 00:09:42.410 { 00:09:42.410 "name": null, 00:09:42.410 "uuid": "3ba67a29-2395-473d-a7d5-148875069664", 00:09:42.410 "is_configured": false, 00:09:42.410 "data_offset": 0, 00:09:42.410 "data_size": 63488 00:09:42.410 }, 00:09:42.410 { 00:09:42.410 "name": "BaseBdev2", 00:09:42.410 "uuid": "fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8", 00:09:42.410 "is_configured": true, 00:09:42.410 "data_offset": 2048, 00:09:42.410 "data_size": 63488 00:09:42.410 }, 00:09:42.410 { 00:09:42.410 "name": "BaseBdev3", 00:09:42.410 "uuid": "a112805f-1386-41e3-b152-22152089dc7b", 00:09:42.410 "is_configured": true, 00:09:42.410 "data_offset": 2048, 00:09:42.410 "data_size": 63488 00:09:42.410 }, 00:09:42.410 { 00:09:42.410 "name": "BaseBdev4", 00:09:42.410 "uuid": "ad1e496a-6393-4b6f-a443-8dbe8a8e3365", 00:09:42.410 "is_configured": true, 00:09:42.410 "data_offset": 2048, 00:09:42.410 "data_size": 63488 00:09:42.410 } 00:09:42.410 ] 00:09:42.410 }' 00:09:42.410 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.410 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.683 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.683 04:55:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:42.683 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.683 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.683 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.683 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:42.683 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.683 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.683 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.683 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:42.683 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3ba67a29-2395-473d-a7d5-148875069664 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.947 [2024-11-21 04:55:59.453053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:42.947 NewBaseBdev 00:09:42.947 [2024-11-21 04:55:59.453411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:42.947 [2024-11-21 04:55:59.453432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:42.947 [2024-11-21 04:55:59.453703] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:42.947 [2024-11-21 04:55:59.453810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:42.947 [2024-11-21 04:55:59.453822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:42.947 [2024-11-21 04:55:59.453915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.947 
04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.947 [ 00:09:42.947 { 00:09:42.947 "name": "NewBaseBdev", 00:09:42.947 "aliases": [ 00:09:42.947 "3ba67a29-2395-473d-a7d5-148875069664" 00:09:42.947 ], 00:09:42.947 "product_name": "Malloc disk", 00:09:42.947 "block_size": 512, 00:09:42.947 "num_blocks": 65536, 00:09:42.947 "uuid": "3ba67a29-2395-473d-a7d5-148875069664", 00:09:42.947 "assigned_rate_limits": { 00:09:42.947 "rw_ios_per_sec": 0, 00:09:42.947 "rw_mbytes_per_sec": 0, 00:09:42.947 "r_mbytes_per_sec": 0, 00:09:42.947 "w_mbytes_per_sec": 0 00:09:42.947 }, 00:09:42.947 "claimed": true, 00:09:42.947 "claim_type": "exclusive_write", 00:09:42.947 "zoned": false, 00:09:42.947 "supported_io_types": { 00:09:42.947 "read": true, 00:09:42.947 "write": true, 00:09:42.947 "unmap": true, 00:09:42.947 "flush": true, 00:09:42.947 "reset": true, 00:09:42.947 "nvme_admin": false, 00:09:42.947 "nvme_io": false, 00:09:42.947 "nvme_io_md": false, 00:09:42.947 "write_zeroes": true, 00:09:42.947 "zcopy": true, 00:09:42.947 "get_zone_info": false, 00:09:42.947 "zone_management": false, 00:09:42.947 "zone_append": false, 00:09:42.947 "compare": false, 00:09:42.947 "compare_and_write": false, 00:09:42.947 "abort": true, 00:09:42.947 "seek_hole": false, 00:09:42.947 "seek_data": false, 00:09:42.947 "copy": true, 00:09:42.947 "nvme_iov_md": false 00:09:42.947 }, 00:09:42.947 "memory_domains": [ 00:09:42.947 { 00:09:42.947 "dma_device_id": "system", 00:09:42.947 "dma_device_type": 1 00:09:42.947 }, 00:09:42.947 { 00:09:42.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.947 "dma_device_type": 2 00:09:42.947 } 00:09:42.947 ], 00:09:42.947 "driver_specific": {} 00:09:42.947 } 00:09:42.947 ] 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:42.947 04:55:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.947 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.947 "name": "Existed_Raid", 00:09:42.947 "uuid": "3934a29f-5099-4b30-b448-30520bd0d499", 00:09:42.947 "strip_size_kb": 64, 00:09:42.947 
"state": "online", 00:09:42.947 "raid_level": "raid0", 00:09:42.948 "superblock": true, 00:09:42.948 "num_base_bdevs": 4, 00:09:42.948 "num_base_bdevs_discovered": 4, 00:09:42.948 "num_base_bdevs_operational": 4, 00:09:42.948 "base_bdevs_list": [ 00:09:42.948 { 00:09:42.948 "name": "NewBaseBdev", 00:09:42.948 "uuid": "3ba67a29-2395-473d-a7d5-148875069664", 00:09:42.948 "is_configured": true, 00:09:42.948 "data_offset": 2048, 00:09:42.948 "data_size": 63488 00:09:42.948 }, 00:09:42.948 { 00:09:42.948 "name": "BaseBdev2", 00:09:42.948 "uuid": "fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8", 00:09:42.948 "is_configured": true, 00:09:42.948 "data_offset": 2048, 00:09:42.948 "data_size": 63488 00:09:42.948 }, 00:09:42.948 { 00:09:42.948 "name": "BaseBdev3", 00:09:42.948 "uuid": "a112805f-1386-41e3-b152-22152089dc7b", 00:09:42.948 "is_configured": true, 00:09:42.948 "data_offset": 2048, 00:09:42.948 "data_size": 63488 00:09:42.948 }, 00:09:42.948 { 00:09:42.948 "name": "BaseBdev4", 00:09:42.948 "uuid": "ad1e496a-6393-4b6f-a443-8dbe8a8e3365", 00:09:42.948 "is_configured": true, 00:09:42.948 "data_offset": 2048, 00:09:42.948 "data_size": 63488 00:09:42.948 } 00:09:42.948 ] 00:09:42.948 }' 00:09:42.948 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.948 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.207 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:43.207 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:43.207 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.207 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.207 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.207 
04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.207 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:43.207 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.207 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.207 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.465 [2024-11-21 04:55:59.944586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.466 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.466 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.466 "name": "Existed_Raid", 00:09:43.466 "aliases": [ 00:09:43.466 "3934a29f-5099-4b30-b448-30520bd0d499" 00:09:43.466 ], 00:09:43.466 "product_name": "Raid Volume", 00:09:43.466 "block_size": 512, 00:09:43.466 "num_blocks": 253952, 00:09:43.466 "uuid": "3934a29f-5099-4b30-b448-30520bd0d499", 00:09:43.466 "assigned_rate_limits": { 00:09:43.466 "rw_ios_per_sec": 0, 00:09:43.466 "rw_mbytes_per_sec": 0, 00:09:43.466 "r_mbytes_per_sec": 0, 00:09:43.466 "w_mbytes_per_sec": 0 00:09:43.466 }, 00:09:43.466 "claimed": false, 00:09:43.466 "zoned": false, 00:09:43.466 "supported_io_types": { 00:09:43.466 "read": true, 00:09:43.466 "write": true, 00:09:43.466 "unmap": true, 00:09:43.466 "flush": true, 00:09:43.466 "reset": true, 00:09:43.466 "nvme_admin": false, 00:09:43.466 "nvme_io": false, 00:09:43.466 "nvme_io_md": false, 00:09:43.466 "write_zeroes": true, 00:09:43.466 "zcopy": false, 00:09:43.466 "get_zone_info": false, 00:09:43.466 "zone_management": false, 00:09:43.466 "zone_append": false, 00:09:43.466 "compare": false, 00:09:43.466 "compare_and_write": false, 00:09:43.466 "abort": 
false, 00:09:43.466 "seek_hole": false, 00:09:43.466 "seek_data": false, 00:09:43.466 "copy": false, 00:09:43.466 "nvme_iov_md": false 00:09:43.466 }, 00:09:43.466 "memory_domains": [ 00:09:43.466 { 00:09:43.466 "dma_device_id": "system", 00:09:43.466 "dma_device_type": 1 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.466 "dma_device_type": 2 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 "dma_device_id": "system", 00:09:43.466 "dma_device_type": 1 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.466 "dma_device_type": 2 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 "dma_device_id": "system", 00:09:43.466 "dma_device_type": 1 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.466 "dma_device_type": 2 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 "dma_device_id": "system", 00:09:43.466 "dma_device_type": 1 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.466 "dma_device_type": 2 00:09:43.466 } 00:09:43.466 ], 00:09:43.466 "driver_specific": { 00:09:43.466 "raid": { 00:09:43.466 "uuid": "3934a29f-5099-4b30-b448-30520bd0d499", 00:09:43.466 "strip_size_kb": 64, 00:09:43.466 "state": "online", 00:09:43.466 "raid_level": "raid0", 00:09:43.466 "superblock": true, 00:09:43.466 "num_base_bdevs": 4, 00:09:43.466 "num_base_bdevs_discovered": 4, 00:09:43.466 "num_base_bdevs_operational": 4, 00:09:43.466 "base_bdevs_list": [ 00:09:43.466 { 00:09:43.466 "name": "NewBaseBdev", 00:09:43.466 "uuid": "3ba67a29-2395-473d-a7d5-148875069664", 00:09:43.466 "is_configured": true, 00:09:43.466 "data_offset": 2048, 00:09:43.466 "data_size": 63488 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 "name": "BaseBdev2", 00:09:43.466 "uuid": "fdd5658c-ff6e-4bb1-ac04-1bf49b27b0d8", 00:09:43.466 "is_configured": true, 00:09:43.466 "data_offset": 2048, 00:09:43.466 "data_size": 63488 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 
"name": "BaseBdev3", 00:09:43.466 "uuid": "a112805f-1386-41e3-b152-22152089dc7b", 00:09:43.466 "is_configured": true, 00:09:43.466 "data_offset": 2048, 00:09:43.466 "data_size": 63488 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 "name": "BaseBdev4", 00:09:43.466 "uuid": "ad1e496a-6393-4b6f-a443-8dbe8a8e3365", 00:09:43.466 "is_configured": true, 00:09:43.466 "data_offset": 2048, 00:09:43.466 "data_size": 63488 00:09:43.466 } 00:09:43.466 ] 00:09:43.466 } 00:09:43.466 } 00:09:43.466 }' 00:09:43.466 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.466 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:43.466 BaseBdev2 00:09:43.466 BaseBdev3 00:09:43.466 BaseBdev4' 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.466 04:56:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.466 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.725 [2024-11-21 04:56:00.207809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.725 [2024-11-21 04:56:00.207890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.725 [2024-11-21 04:56:00.208007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.725 [2024-11-21 04:56:00.208129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.725 [2024-11-21 04:56:00.208178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81162 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81162 ']' 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81162 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81162 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.725 killing process with pid 81162 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81162' 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81162 00:09:43.725 [2024-11-21 04:56:00.255337] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.725 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81162 00:09:43.725 [2024-11-21 04:56:00.296173] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.984 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:43.984 00:09:43.984 real 0m9.525s 00:09:43.984 user 0m16.274s 00:09:43.984 sys 0m2.055s 00:09:43.984 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.984 04:56:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.984 ************************************ 00:09:43.984 END TEST raid_state_function_test_sb 00:09:43.984 ************************************ 00:09:43.984 04:56:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:43.984 04:56:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:43.984 04:56:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.984 04:56:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.984 ************************************ 00:09:43.984 START TEST raid_superblock_test 00:09:43.984 ************************************ 00:09:43.984 04:56:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:09:43.984 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:43.984 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:43.984 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:43.984 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81816 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81816 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81816 ']' 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.985 04:56:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.985 [2024-11-21 04:56:00.669766] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:09:43.985 [2024-11-21 04:56:00.669963] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81816 ] 00:09:44.244 [2024-11-21 04:56:00.841249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.244 [2024-11-21 04:56:00.867345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.244 [2024-11-21 04:56:00.909780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.244 [2024-11-21 04:56:00.909903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:44.813 
04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.813 malloc1 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.813 [2024-11-21 04:56:01.524203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.813 [2024-11-21 04:56:01.524321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.813 [2024-11-21 04:56:01.524374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:44.813 [2024-11-21 04:56:01.524435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.813 [2024-11-21 04:56:01.526561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.813 [2024-11-21 04:56:01.526649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.813 pt1 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.813 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.072 malloc2 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.072 [2024-11-21 04:56:01.557347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.072 [2024-11-21 04:56:01.557403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.072 [2024-11-21 04:56:01.557420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:45.072 [2024-11-21 04:56:01.557430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.072 [2024-11-21 04:56:01.559700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.072 [2024-11-21 04:56:01.559799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.072 
pt2 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.072 malloc3 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.072 [2024-11-21 04:56:01.586001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:45.072 [2024-11-21 04:56:01.586111] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.072 [2024-11-21 04:56:01.586149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:45.072 [2024-11-21 04:56:01.586180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.072 [2024-11-21 04:56:01.588287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.072 [2024-11-21 04:56:01.588359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:45.072 pt3 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.072 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.072 malloc4 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.073 [2024-11-21 04:56:01.638539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:45.073 [2024-11-21 04:56:01.638694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.073 [2024-11-21 04:56:01.638758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:45.073 [2024-11-21 04:56:01.638827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.073 [2024-11-21 04:56:01.642594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.073 [2024-11-21 04:56:01.642717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:45.073 pt4 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.073 [2024-11-21 04:56:01.650989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:45.073 [2024-11-21 
04:56:01.653350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.073 [2024-11-21 04:56:01.653467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:45.073 [2024-11-21 04:56:01.653572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:45.073 [2024-11-21 04:56:01.653841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:45.073 [2024-11-21 04:56:01.653909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:45.073 [2024-11-21 04:56:01.654285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:45.073 [2024-11-21 04:56:01.654522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:45.073 [2024-11-21 04:56:01.654577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:45.073 [2024-11-21 04:56:01.654760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.073 "name": "raid_bdev1", 00:09:45.073 "uuid": "4d75029a-3abf-4792-a5d6-585a0cc1da8a", 00:09:45.073 "strip_size_kb": 64, 00:09:45.073 "state": "online", 00:09:45.073 "raid_level": "raid0", 00:09:45.073 "superblock": true, 00:09:45.073 "num_base_bdevs": 4, 00:09:45.073 "num_base_bdevs_discovered": 4, 00:09:45.073 "num_base_bdevs_operational": 4, 00:09:45.073 "base_bdevs_list": [ 00:09:45.073 { 00:09:45.073 "name": "pt1", 00:09:45.073 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.073 "is_configured": true, 00:09:45.073 "data_offset": 2048, 00:09:45.073 "data_size": 63488 00:09:45.073 }, 00:09:45.073 { 00:09:45.073 "name": "pt2", 00:09:45.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.073 "is_configured": true, 00:09:45.073 "data_offset": 2048, 00:09:45.073 "data_size": 63488 00:09:45.073 }, 00:09:45.073 { 00:09:45.073 "name": "pt3", 00:09:45.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.073 "is_configured": true, 00:09:45.073 "data_offset": 2048, 00:09:45.073 
"data_size": 63488 00:09:45.073 }, 00:09:45.073 { 00:09:45.073 "name": "pt4", 00:09:45.073 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:45.073 "is_configured": true, 00:09:45.073 "data_offset": 2048, 00:09:45.073 "data_size": 63488 00:09:45.073 } 00:09:45.073 ] 00:09:45.073 }' 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.073 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.643 [2024-11-21 04:56:02.082556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.643 "name": "raid_bdev1", 00:09:45.643 "aliases": [ 00:09:45.643 "4d75029a-3abf-4792-a5d6-585a0cc1da8a" 
00:09:45.643 ], 00:09:45.643 "product_name": "Raid Volume", 00:09:45.643 "block_size": 512, 00:09:45.643 "num_blocks": 253952, 00:09:45.643 "uuid": "4d75029a-3abf-4792-a5d6-585a0cc1da8a", 00:09:45.643 "assigned_rate_limits": { 00:09:45.643 "rw_ios_per_sec": 0, 00:09:45.643 "rw_mbytes_per_sec": 0, 00:09:45.643 "r_mbytes_per_sec": 0, 00:09:45.643 "w_mbytes_per_sec": 0 00:09:45.643 }, 00:09:45.643 "claimed": false, 00:09:45.643 "zoned": false, 00:09:45.643 "supported_io_types": { 00:09:45.643 "read": true, 00:09:45.643 "write": true, 00:09:45.643 "unmap": true, 00:09:45.643 "flush": true, 00:09:45.643 "reset": true, 00:09:45.643 "nvme_admin": false, 00:09:45.643 "nvme_io": false, 00:09:45.643 "nvme_io_md": false, 00:09:45.643 "write_zeroes": true, 00:09:45.643 "zcopy": false, 00:09:45.643 "get_zone_info": false, 00:09:45.643 "zone_management": false, 00:09:45.643 "zone_append": false, 00:09:45.643 "compare": false, 00:09:45.643 "compare_and_write": false, 00:09:45.643 "abort": false, 00:09:45.643 "seek_hole": false, 00:09:45.643 "seek_data": false, 00:09:45.643 "copy": false, 00:09:45.643 "nvme_iov_md": false 00:09:45.643 }, 00:09:45.643 "memory_domains": [ 00:09:45.643 { 00:09:45.643 "dma_device_id": "system", 00:09:45.643 "dma_device_type": 1 00:09:45.643 }, 00:09:45.643 { 00:09:45.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.643 "dma_device_type": 2 00:09:45.643 }, 00:09:45.643 { 00:09:45.643 "dma_device_id": "system", 00:09:45.643 "dma_device_type": 1 00:09:45.643 }, 00:09:45.643 { 00:09:45.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.643 "dma_device_type": 2 00:09:45.643 }, 00:09:45.643 { 00:09:45.643 "dma_device_id": "system", 00:09:45.643 "dma_device_type": 1 00:09:45.643 }, 00:09:45.643 { 00:09:45.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.643 "dma_device_type": 2 00:09:45.643 }, 00:09:45.643 { 00:09:45.643 "dma_device_id": "system", 00:09:45.643 "dma_device_type": 1 00:09:45.643 }, 00:09:45.643 { 00:09:45.643 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:45.643 "dma_device_type": 2 00:09:45.643 } 00:09:45.643 ], 00:09:45.643 "driver_specific": { 00:09:45.643 "raid": { 00:09:45.643 "uuid": "4d75029a-3abf-4792-a5d6-585a0cc1da8a", 00:09:45.643 "strip_size_kb": 64, 00:09:45.643 "state": "online", 00:09:45.643 "raid_level": "raid0", 00:09:45.643 "superblock": true, 00:09:45.643 "num_base_bdevs": 4, 00:09:45.643 "num_base_bdevs_discovered": 4, 00:09:45.643 "num_base_bdevs_operational": 4, 00:09:45.643 "base_bdevs_list": [ 00:09:45.643 { 00:09:45.643 "name": "pt1", 00:09:45.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.643 "is_configured": true, 00:09:45.643 "data_offset": 2048, 00:09:45.643 "data_size": 63488 00:09:45.643 }, 00:09:45.643 { 00:09:45.643 "name": "pt2", 00:09:45.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.643 "is_configured": true, 00:09:45.643 "data_offset": 2048, 00:09:45.643 "data_size": 63488 00:09:45.643 }, 00:09:45.643 { 00:09:45.643 "name": "pt3", 00:09:45.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.643 "is_configured": true, 00:09:45.643 "data_offset": 2048, 00:09:45.643 "data_size": 63488 00:09:45.643 }, 00:09:45.643 { 00:09:45.643 "name": "pt4", 00:09:45.643 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:45.643 "is_configured": true, 00:09:45.643 "data_offset": 2048, 00:09:45.643 "data_size": 63488 00:09:45.643 } 00:09:45.643 ] 00:09:45.643 } 00:09:45.643 } 00:09:45.643 }' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.643 pt2 00:09:45.643 pt3 00:09:45.643 pt4' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.643 04:56:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.643 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 [2024-11-21 04:56:02.409869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4d75029a-3abf-4792-a5d6-585a0cc1da8a 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4d75029a-3abf-4792-a5d6-585a0cc1da8a ']' 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 [2024-11-21 04:56:02.453507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.903 [2024-11-21 04:56:02.453574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.903 [2024-11-21 04:56:02.453672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.903 [2024-11-21 04:56:02.453763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.903 [2024-11-21 04:56:02.453823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.903 04:56:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 [2024-11-21 04:56:02.621266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:45.903 [2024-11-21 04:56:02.623284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:45.903 [2024-11-21 04:56:02.623397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:45.903 [2024-11-21 04:56:02.623456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:45.903 [2024-11-21 04:56:02.623510] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:45.903 [2024-11-21 04:56:02.623564] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:45.903 [2024-11-21 04:56:02.623594] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:45.904 [2024-11-21 04:56:02.623619] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:45.904 [2024-11-21 04:56:02.623642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.904 [2024-11-21 04:56:02.623654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:09:45.904 request: 00:09:45.904 { 00:09:45.904 "name": "raid_bdev1", 00:09:45.904 "raid_level": "raid0", 00:09:45.904 "base_bdevs": [ 00:09:45.904 "malloc1", 00:09:45.904 "malloc2", 00:09:45.904 "malloc3", 00:09:45.904 "malloc4" 00:09:45.904 ], 00:09:45.904 "strip_size_kb": 64, 00:09:45.904 "superblock": false, 00:09:45.904 "method": "bdev_raid_create", 00:09:45.904 "req_id": 1 00:09:45.904 } 00:09:45.904 Got JSON-RPC error response 00:09:45.904 response: 00:09:45.904 { 00:09:45.904 "code": -17, 00:09:45.904 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:45.904 } 00:09:45.904 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:45.904 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:45.904 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:45.904 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:45.904 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:45.904 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:45.904 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.164 [2024-11-21 04:56:02.681123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:46.164 [2024-11-21 04:56:02.681207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.164 [2024-11-21 04:56:02.681243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:46.164 [2024-11-21 04:56:02.681269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.164 [2024-11-21 04:56:02.683463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.164 [2024-11-21 04:56:02.683549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:46.164 [2024-11-21 04:56:02.683644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:46.164 [2024-11-21 04:56:02.683714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:46.164 pt1 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.164 "name": "raid_bdev1", 00:09:46.164 "uuid": "4d75029a-3abf-4792-a5d6-585a0cc1da8a", 00:09:46.164 "strip_size_kb": 64, 00:09:46.164 "state": "configuring", 00:09:46.164 "raid_level": "raid0", 00:09:46.164 "superblock": true, 00:09:46.164 "num_base_bdevs": 4, 00:09:46.164 "num_base_bdevs_discovered": 1, 00:09:46.164 "num_base_bdevs_operational": 4, 00:09:46.164 "base_bdevs_list": [ 00:09:46.164 { 00:09:46.164 "name": "pt1", 00:09:46.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.164 "is_configured": true, 00:09:46.164 "data_offset": 2048, 00:09:46.164 "data_size": 63488 00:09:46.164 }, 00:09:46.164 { 00:09:46.164 "name": null, 00:09:46.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.164 "is_configured": false, 00:09:46.164 "data_offset": 2048, 00:09:46.164 "data_size": 63488 00:09:46.164 }, 00:09:46.164 { 00:09:46.164 "name": null, 00:09:46.164 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.164 "is_configured": false, 00:09:46.164 "data_offset": 2048, 00:09:46.164 "data_size": 63488 00:09:46.164 }, 00:09:46.164 { 00:09:46.164 "name": null, 00:09:46.164 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:46.164 "is_configured": false, 00:09:46.164 "data_offset": 2048, 00:09:46.164 "data_size": 63488 00:09:46.164 } 00:09:46.164 ] 00:09:46.164 }' 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.164 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.423 [2024-11-21 04:56:03.132380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.423 [2024-11-21 04:56:03.132443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.423 [2024-11-21 04:56:03.132466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:46.423 [2024-11-21 04:56:03.132476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.423 [2024-11-21 04:56:03.132877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.423 [2024-11-21 04:56:03.132894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.423 [2024-11-21 04:56:03.132971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.423 [2024-11-21 04:56:03.132992] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.423 pt2 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.423 [2024-11-21 04:56:03.144348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.423 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.424 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.424 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.424 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.424 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.682 04:56:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.682 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.682 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.682 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.683 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.683 "name": "raid_bdev1", 00:09:46.683 "uuid": "4d75029a-3abf-4792-a5d6-585a0cc1da8a", 00:09:46.683 "strip_size_kb": 64, 00:09:46.683 "state": "configuring", 00:09:46.683 "raid_level": "raid0", 00:09:46.683 "superblock": true, 00:09:46.683 "num_base_bdevs": 4, 00:09:46.683 "num_base_bdevs_discovered": 1, 00:09:46.683 "num_base_bdevs_operational": 4, 00:09:46.683 "base_bdevs_list": [ 00:09:46.683 { 00:09:46.683 "name": "pt1", 00:09:46.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.683 "is_configured": true, 00:09:46.683 "data_offset": 2048, 00:09:46.683 "data_size": 63488 00:09:46.683 }, 00:09:46.683 { 00:09:46.683 "name": null, 00:09:46.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.683 "is_configured": false, 00:09:46.683 "data_offset": 0, 00:09:46.683 "data_size": 63488 00:09:46.683 }, 00:09:46.683 { 00:09:46.683 "name": null, 00:09:46.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.683 "is_configured": false, 00:09:46.683 "data_offset": 2048, 00:09:46.683 "data_size": 63488 00:09:46.683 }, 00:09:46.683 { 00:09:46.683 "name": null, 00:09:46.683 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:46.683 "is_configured": false, 00:09:46.683 "data_offset": 2048, 00:09:46.683 "data_size": 63488 00:09:46.683 } 00:09:46.683 ] 00:09:46.683 }' 00:09:46.683 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.683 04:56:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.942 [2024-11-21 04:56:03.563624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.942 [2024-11-21 04:56:03.563743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.942 [2024-11-21 04:56:03.563778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:46.942 [2024-11-21 04:56:03.563808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.942 [2024-11-21 04:56:03.564259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.942 [2024-11-21 04:56:03.564325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.942 [2024-11-21 04:56:03.564449] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.942 [2024-11-21 04:56:03.564509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.942 pt2 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.942 [2024-11-21 04:56:03.575594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:46.942 [2024-11-21 04:56:03.575640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.942 [2024-11-21 04:56:03.575654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:46.942 [2024-11-21 04:56:03.575664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.942 [2024-11-21 04:56:03.575966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.942 [2024-11-21 04:56:03.575984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:46.942 [2024-11-21 04:56:03.576035] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:46.942 [2024-11-21 04:56:03.576053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:46.942 pt3 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.942 [2024-11-21 04:56:03.587562] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:46.942 [2024-11-21 04:56:03.587614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.942 [2024-11-21 04:56:03.587629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:46.942 [2024-11-21 04:56:03.587639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.942 [2024-11-21 04:56:03.587947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.942 [2024-11-21 04:56:03.587964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:46.942 [2024-11-21 04:56:03.588018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:46.942 [2024-11-21 04:56:03.588037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:46.942 [2024-11-21 04:56:03.588158] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:46.942 [2024-11-21 04:56:03.588172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:46.942 [2024-11-21 04:56:03.588398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:46.942 [2024-11-21 04:56:03.588526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:46.942 [2024-11-21 04:56:03.588536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:46.942 [2024-11-21 04:56:03.588648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.942 pt4 00:09:46.942 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.943 "name": "raid_bdev1", 00:09:46.943 "uuid": "4d75029a-3abf-4792-a5d6-585a0cc1da8a", 00:09:46.943 "strip_size_kb": 64, 00:09:46.943 "state": "online", 00:09:46.943 "raid_level": "raid0", 00:09:46.943 
"superblock": true, 00:09:46.943 "num_base_bdevs": 4, 00:09:46.943 "num_base_bdevs_discovered": 4, 00:09:46.943 "num_base_bdevs_operational": 4, 00:09:46.943 "base_bdevs_list": [ 00:09:46.943 { 00:09:46.943 "name": "pt1", 00:09:46.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.943 "is_configured": true, 00:09:46.943 "data_offset": 2048, 00:09:46.943 "data_size": 63488 00:09:46.943 }, 00:09:46.943 { 00:09:46.943 "name": "pt2", 00:09:46.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.943 "is_configured": true, 00:09:46.943 "data_offset": 2048, 00:09:46.943 "data_size": 63488 00:09:46.943 }, 00:09:46.943 { 00:09:46.943 "name": "pt3", 00:09:46.943 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.943 "is_configured": true, 00:09:46.943 "data_offset": 2048, 00:09:46.943 "data_size": 63488 00:09:46.943 }, 00:09:46.943 { 00:09:46.943 "name": "pt4", 00:09:46.943 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:46.943 "is_configured": true, 00:09:46.943 "data_offset": 2048, 00:09:46.943 "data_size": 63488 00:09:46.943 } 00:09:46.943 ] 00:09:46.943 }' 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.943 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.510 04:56:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.510 [2024-11-21 04:56:04.027225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.510 "name": "raid_bdev1", 00:09:47.510 "aliases": [ 00:09:47.510 "4d75029a-3abf-4792-a5d6-585a0cc1da8a" 00:09:47.510 ], 00:09:47.510 "product_name": "Raid Volume", 00:09:47.510 "block_size": 512, 00:09:47.510 "num_blocks": 253952, 00:09:47.510 "uuid": "4d75029a-3abf-4792-a5d6-585a0cc1da8a", 00:09:47.510 "assigned_rate_limits": { 00:09:47.510 "rw_ios_per_sec": 0, 00:09:47.510 "rw_mbytes_per_sec": 0, 00:09:47.510 "r_mbytes_per_sec": 0, 00:09:47.510 "w_mbytes_per_sec": 0 00:09:47.510 }, 00:09:47.510 "claimed": false, 00:09:47.510 "zoned": false, 00:09:47.510 "supported_io_types": { 00:09:47.510 "read": true, 00:09:47.510 "write": true, 00:09:47.510 "unmap": true, 00:09:47.510 "flush": true, 00:09:47.510 "reset": true, 00:09:47.510 "nvme_admin": false, 00:09:47.510 "nvme_io": false, 00:09:47.510 "nvme_io_md": false, 00:09:47.510 "write_zeroes": true, 00:09:47.510 "zcopy": false, 00:09:47.510 "get_zone_info": false, 00:09:47.510 "zone_management": false, 00:09:47.510 "zone_append": false, 00:09:47.510 "compare": false, 00:09:47.510 "compare_and_write": false, 00:09:47.510 "abort": false, 00:09:47.510 "seek_hole": false, 00:09:47.510 "seek_data": false, 00:09:47.510 "copy": false, 00:09:47.510 "nvme_iov_md": false 00:09:47.510 }, 00:09:47.510 
"memory_domains": [ 00:09:47.510 { 00:09:47.510 "dma_device_id": "system", 00:09:47.510 "dma_device_type": 1 00:09:47.510 }, 00:09:47.510 { 00:09:47.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.510 "dma_device_type": 2 00:09:47.510 }, 00:09:47.510 { 00:09:47.510 "dma_device_id": "system", 00:09:47.510 "dma_device_type": 1 00:09:47.510 }, 00:09:47.510 { 00:09:47.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.510 "dma_device_type": 2 00:09:47.510 }, 00:09:47.510 { 00:09:47.510 "dma_device_id": "system", 00:09:47.510 "dma_device_type": 1 00:09:47.510 }, 00:09:47.510 { 00:09:47.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.510 "dma_device_type": 2 00:09:47.510 }, 00:09:47.510 { 00:09:47.510 "dma_device_id": "system", 00:09:47.510 "dma_device_type": 1 00:09:47.510 }, 00:09:47.510 { 00:09:47.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.510 "dma_device_type": 2 00:09:47.510 } 00:09:47.510 ], 00:09:47.510 "driver_specific": { 00:09:47.510 "raid": { 00:09:47.510 "uuid": "4d75029a-3abf-4792-a5d6-585a0cc1da8a", 00:09:47.510 "strip_size_kb": 64, 00:09:47.510 "state": "online", 00:09:47.510 "raid_level": "raid0", 00:09:47.510 "superblock": true, 00:09:47.510 "num_base_bdevs": 4, 00:09:47.510 "num_base_bdevs_discovered": 4, 00:09:47.510 "num_base_bdevs_operational": 4, 00:09:47.510 "base_bdevs_list": [ 00:09:47.510 { 00:09:47.510 "name": "pt1", 00:09:47.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.510 "is_configured": true, 00:09:47.510 "data_offset": 2048, 00:09:47.510 "data_size": 63488 00:09:47.510 }, 00:09:47.510 { 00:09:47.510 "name": "pt2", 00:09:47.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.510 "is_configured": true, 00:09:47.510 "data_offset": 2048, 00:09:47.510 "data_size": 63488 00:09:47.510 }, 00:09:47.510 { 00:09:47.510 "name": "pt3", 00:09:47.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.510 "is_configured": true, 00:09:47.510 "data_offset": 2048, 00:09:47.510 "data_size": 63488 
00:09:47.510 }, 00:09:47.510 { 00:09:47.510 "name": "pt4", 00:09:47.510 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:47.510 "is_configured": true, 00:09:47.510 "data_offset": 2048, 00:09:47.510 "data_size": 63488 00:09:47.510 } 00:09:47.510 ] 00:09:47.510 } 00:09:47.510 } 00:09:47.510 }' 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.510 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.510 pt2 00:09:47.510 pt3 00:09:47.511 pt4' 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.511 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.771 [2024-11-21 04:56:04.362553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4d75029a-3abf-4792-a5d6-585a0cc1da8a '!=' 4d75029a-3abf-4792-a5d6-585a0cc1da8a ']' 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81816 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81816 ']' 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81816 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81816 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81816' 00:09:47.771 killing process with pid 81816 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 81816 00:09:47.771 [2024-11-21 04:56:04.435777] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.771 [2024-11-21 04:56:04.435877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.771 [2024-11-21 04:56:04.435951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.771 [2024-11-21 04:56:04.435963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:47.771 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 81816 00:09:47.771 [2024-11-21 04:56:04.479075] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.031 ************************************ 00:09:48.031 END TEST raid_superblock_test 00:09:48.031 ************************************ 00:09:48.031 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:48.031 00:09:48.031 real 0m4.110s 00:09:48.031 user 0m6.448s 00:09:48.031 sys 0m0.927s 00:09:48.031 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.031 04:56:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.031 04:56:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:48.031 04:56:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:48.031 04:56:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.031 04:56:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.291 ************************************ 00:09:48.291 START TEST raid_read_error_test 00:09:48.291 ************************************ 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3QGzTshZIe 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82064 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82064 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 82064 ']' 00:09:48.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.291 04:56:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.291 [2024-11-21 04:56:04.870688] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:09:48.291 [2024-11-21 04:56:04.870815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82064 ] 00:09:48.551 [2024-11-21 04:56:05.040714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.551 [2024-11-21 04:56:05.067892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.551 [2024-11-21 04:56:05.109999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.551 [2024-11-21 04:56:05.110038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 BaseBdev1_malloc 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 true 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 [2024-11-21 04:56:05.732498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.121 [2024-11-21 04:56:05.732615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.121 [2024-11-21 04:56:05.732656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.121 [2024-11-21 04:56:05.732668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.121 [2024-11-21 04:56:05.734845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.121 [2024-11-21 04:56:05.734883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.121 BaseBdev1 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 BaseBdev2_malloc 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 true 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 [2024-11-21 04:56:05.772851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.121 [2024-11-21 04:56:05.772893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.121 [2024-11-21 04:56:05.772910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.121 [2024-11-21 04:56:05.772918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.121 [2024-11-21 04:56:05.774904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.121 [2024-11-21 04:56:05.775007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.121 BaseBdev2 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 BaseBdev3_malloc 00:09:49.121 04:56:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 true 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.121 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 [2024-11-21 04:56:05.813249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.121 [2024-11-21 04:56:05.813342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.122 [2024-11-21 04:56:05.813364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:49.122 [2024-11-21 04:56:05.813372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.122 [2024-11-21 04:56:05.815419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.122 [2024-11-21 04:56:05.815452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:49.122 BaseBdev3 00:09:49.122 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.122 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.122 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:49.122 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.122 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.122 BaseBdev4_malloc 00:09:49.122 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.122 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:49.122 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.122 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.381 true 00:09:49.381 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.381 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:49.381 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.381 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.381 [2024-11-21 04:56:05.863341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:49.381 [2024-11-21 04:56:05.863445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.381 [2024-11-21 04:56:05.863474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:49.381 [2024-11-21 04:56:05.863484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.381 [2024-11-21 04:56:05.865695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.381 [2024-11-21 04:56:05.865737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:49.381 BaseBdev4 00:09:49.381 04:56:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.381 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:49.381 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.381 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.381 [2024-11-21 04:56:05.875397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.382 [2024-11-21 04:56:05.877169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.382 [2024-11-21 04:56:05.877267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.382 [2024-11-21 04:56:05.877320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:49.382 [2024-11-21 04:56:05.877515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:49.382 [2024-11-21 04:56:05.877528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:49.382 [2024-11-21 04:56:05.877783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:49.382 [2024-11-21 04:56:05.877910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:49.382 [2024-11-21 04:56:05.877928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:49.382 [2024-11-21 04:56:05.878058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:49.382 04:56:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.382 "name": "raid_bdev1", 00:09:49.382 "uuid": "6438cab3-5a5b-4887-8b74-3130936a82c0", 00:09:49.382 "strip_size_kb": 64, 00:09:49.382 "state": "online", 00:09:49.382 "raid_level": "raid0", 00:09:49.382 "superblock": true, 00:09:49.382 "num_base_bdevs": 4, 00:09:49.382 "num_base_bdevs_discovered": 4, 00:09:49.382 "num_base_bdevs_operational": 4, 00:09:49.382 "base_bdevs_list": [ 00:09:49.382 
{ 00:09:49.382 "name": "BaseBdev1", 00:09:49.382 "uuid": "58181ab0-a221-55f2-97fd-8c3d7879d5ae", 00:09:49.382 "is_configured": true, 00:09:49.382 "data_offset": 2048, 00:09:49.382 "data_size": 63488 00:09:49.382 }, 00:09:49.382 { 00:09:49.382 "name": "BaseBdev2", 00:09:49.382 "uuid": "c1cbaf08-eb24-53ae-acb0-b31497c88ac3", 00:09:49.382 "is_configured": true, 00:09:49.382 "data_offset": 2048, 00:09:49.382 "data_size": 63488 00:09:49.382 }, 00:09:49.382 { 00:09:49.382 "name": "BaseBdev3", 00:09:49.382 "uuid": "01d5adb5-a234-558b-9227-bb7b45751cc2", 00:09:49.382 "is_configured": true, 00:09:49.382 "data_offset": 2048, 00:09:49.382 "data_size": 63488 00:09:49.382 }, 00:09:49.382 { 00:09:49.382 "name": "BaseBdev4", 00:09:49.382 "uuid": "1eb3704a-8164-55bf-b452-7f5452ace49e", 00:09:49.382 "is_configured": true, 00:09:49.382 "data_offset": 2048, 00:09:49.382 "data_size": 63488 00:09:49.382 } 00:09:49.382 ] 00:09:49.382 }' 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.382 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.642 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.642 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.642 [2024-11-21 04:56:06.343019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.584 04:56:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.584 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.843 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.843 04:56:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.843 "name": "raid_bdev1", 00:09:50.843 "uuid": "6438cab3-5a5b-4887-8b74-3130936a82c0", 00:09:50.843 "strip_size_kb": 64, 00:09:50.843 "state": "online", 00:09:50.843 "raid_level": "raid0", 00:09:50.843 "superblock": true, 00:09:50.843 "num_base_bdevs": 4, 00:09:50.843 "num_base_bdevs_discovered": 4, 00:09:50.843 "num_base_bdevs_operational": 4, 00:09:50.843 "base_bdevs_list": [ 00:09:50.843 { 00:09:50.843 "name": "BaseBdev1", 00:09:50.843 "uuid": "58181ab0-a221-55f2-97fd-8c3d7879d5ae", 00:09:50.843 "is_configured": true, 00:09:50.843 "data_offset": 2048, 00:09:50.843 "data_size": 63488 00:09:50.843 }, 00:09:50.843 { 00:09:50.843 "name": "BaseBdev2", 00:09:50.843 "uuid": "c1cbaf08-eb24-53ae-acb0-b31497c88ac3", 00:09:50.843 "is_configured": true, 00:09:50.843 "data_offset": 2048, 00:09:50.843 "data_size": 63488 00:09:50.843 }, 00:09:50.843 { 00:09:50.843 "name": "BaseBdev3", 00:09:50.843 "uuid": "01d5adb5-a234-558b-9227-bb7b45751cc2", 00:09:50.843 "is_configured": true, 00:09:50.843 "data_offset": 2048, 00:09:50.843 "data_size": 63488 00:09:50.843 }, 00:09:50.843 { 00:09:50.843 "name": "BaseBdev4", 00:09:50.843 "uuid": "1eb3704a-8164-55bf-b452-7f5452ace49e", 00:09:50.843 "is_configured": true, 00:09:50.843 "data_offset": 2048, 00:09:50.843 "data_size": 63488 00:09:50.843 } 00:09:50.843 ] 00:09:50.843 }' 00:09:50.843 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.843 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.103 [2024-11-21 04:56:07.698604] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.103 [2024-11-21 04:56:07.698636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.103 [2024-11-21 04:56:07.701497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.103 [2024-11-21 04:56:07.701589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.103 [2024-11-21 04:56:07.701658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.103 [2024-11-21 04:56:07.701740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:51.103 { 00:09:51.103 "results": [ 00:09:51.103 { 00:09:51.103 "job": "raid_bdev1", 00:09:51.103 "core_mask": "0x1", 00:09:51.103 "workload": "randrw", 00:09:51.103 "percentage": 50, 00:09:51.103 "status": "finished", 00:09:51.103 "queue_depth": 1, 00:09:51.103 "io_size": 131072, 00:09:51.103 "runtime": 1.356352, 00:09:51.103 "iops": 16868.040154768085, 00:09:51.103 "mibps": 2108.5050193460106, 00:09:51.103 "io_failed": 1, 00:09:51.103 "io_timeout": 0, 00:09:51.103 "avg_latency_us": 82.21415580053136, 00:09:51.103 "min_latency_us": 24.705676855895195, 00:09:51.103 "max_latency_us": 1402.2986899563318 00:09:51.103 } 00:09:51.103 ], 00:09:51.103 "core_count": 1 00:09:51.103 } 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82064 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 82064 ']' 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 82064 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82064 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82064' 00:09:51.103 killing process with pid 82064 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 82064 00:09:51.103 [2024-11-21 04:56:07.752632] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.103 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 82064 00:09:51.103 [2024-11-21 04:56:07.787189] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.363 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3QGzTshZIe 00:09:51.363 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:51.364 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:51.364 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:51.364 ************************************ 00:09:51.364 END TEST raid_read_error_test 00:09:51.364 ************************************ 00:09:51.364 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:51.364 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.364 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:51.364 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:51.364 00:09:51.364 real 0m3.228s 
00:09:51.364 user 0m4.012s 00:09:51.364 sys 0m0.515s 00:09:51.364 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.364 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.364 04:56:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:51.364 04:56:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:51.364 04:56:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.364 04:56:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.364 ************************************ 00:09:51.364 START TEST raid_write_error_test 00:09:51.364 ************************************ 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YpvlppIJiz 00:09:51.364 04:56:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82193 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82193 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 82193 ']' 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.364 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 [2024-11-21 04:56:08.167009] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:09:51.623 [2024-11-21 04:56:08.167143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82193 ] 00:09:51.623 [2024-11-21 04:56:08.338106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.883 [2024-11-21 04:56:08.363756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.883 [2024-11-21 04:56:08.405914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.883 [2024-11-21 04:56:08.405952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.453 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.453 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:52.453 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.453 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:52.453 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 BaseBdev1_malloc 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 true 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 [2024-11-21 04:56:09.019690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.453 [2024-11-21 04:56:09.019791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.453 [2024-11-21 04:56:09.019814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.453 [2024-11-21 04:56:09.019830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.453 [2024-11-21 04:56:09.021902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.453 [2024-11-21 04:56:09.021941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.453 BaseBdev1 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 BaseBdev2_malloc 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.453 04:56:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 true 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 [2024-11-21 04:56:09.056057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.453 [2024-11-21 04:56:09.056113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.453 [2024-11-21 04:56:09.056131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.453 [2024-11-21 04:56:09.056139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.453 [2024-11-21 04:56:09.058183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.453 [2024-11-21 04:56:09.058217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.453 BaseBdev2 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:52.453 BaseBdev3_malloc 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 true 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 [2024-11-21 04:56:09.096672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:52.453 [2024-11-21 04:56:09.096716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.453 [2024-11-21 04:56:09.096735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:52.453 [2024-11-21 04:56:09.096744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.453 [2024-11-21 04:56:09.098925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.453 [2024-11-21 04:56:09.098960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:52.453 BaseBdev3 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 BaseBdev4_malloc 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 true 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 [2024-11-21 04:56:09.144104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:52.453 [2024-11-21 04:56:09.144147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.453 [2024-11-21 04:56:09.144169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:52.453 [2024-11-21 04:56:09.144178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.453 [2024-11-21 04:56:09.146176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.453 [2024-11-21 04:56:09.146211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:52.453 BaseBdev4 
00:09:52.453 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.454 [2024-11-21 04:56:09.156128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.454 [2024-11-21 04:56:09.157922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.454 [2024-11-21 04:56:09.158002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.454 [2024-11-21 04:56:09.158054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:52.454 [2024-11-21 04:56:09.158253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:52.454 [2024-11-21 04:56:09.158266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:52.454 [2024-11-21 04:56:09.158525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:52.454 [2024-11-21 04:56:09.158661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:52.454 [2024-11-21 04:56:09.158674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:52.454 [2024-11-21 04:56:09.158812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.454 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.713 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.713 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.713 "name": "raid_bdev1", 00:09:52.713 "uuid": "f60be01b-8413-4cb7-bdcf-0734e79590c3", 00:09:52.713 "strip_size_kb": 64, 00:09:52.713 "state": "online", 00:09:52.713 "raid_level": "raid0", 00:09:52.713 "superblock": true, 00:09:52.713 "num_base_bdevs": 4, 00:09:52.713 "num_base_bdevs_discovered": 4, 00:09:52.713 
"num_base_bdevs_operational": 4, 00:09:52.713 "base_bdevs_list": [ 00:09:52.713 { 00:09:52.713 "name": "BaseBdev1", 00:09:52.713 "uuid": "89d7ff6a-3d45-5806-8929-c269f3bc42b0", 00:09:52.713 "is_configured": true, 00:09:52.713 "data_offset": 2048, 00:09:52.713 "data_size": 63488 00:09:52.713 }, 00:09:52.713 { 00:09:52.713 "name": "BaseBdev2", 00:09:52.713 "uuid": "aa660985-8e83-54c0-a379-b6d05de57be7", 00:09:52.713 "is_configured": true, 00:09:52.713 "data_offset": 2048, 00:09:52.713 "data_size": 63488 00:09:52.713 }, 00:09:52.713 { 00:09:52.713 "name": "BaseBdev3", 00:09:52.713 "uuid": "b6af7af5-1a83-5d13-9763-997f2430a4bb", 00:09:52.713 "is_configured": true, 00:09:52.713 "data_offset": 2048, 00:09:52.713 "data_size": 63488 00:09:52.713 }, 00:09:52.713 { 00:09:52.713 "name": "BaseBdev4", 00:09:52.713 "uuid": "8a0a8d1a-f6cc-5af9-89b8-29fa1e2496b9", 00:09:52.713 "is_configured": true, 00:09:52.713 "data_offset": 2048, 00:09:52.713 "data_size": 63488 00:09:52.713 } 00:09:52.713 ] 00:09:52.713 }' 00:09:52.713 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.713 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.972 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.972 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.972 [2024-11-21 04:56:09.695618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.912 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.173 04:56:10 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.173 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.173 "name": "raid_bdev1", 00:09:54.173 "uuid": "f60be01b-8413-4cb7-bdcf-0734e79590c3", 00:09:54.173 "strip_size_kb": 64, 00:09:54.173 "state": "online", 00:09:54.173 "raid_level": "raid0", 00:09:54.173 "superblock": true, 00:09:54.173 "num_base_bdevs": 4, 00:09:54.173 "num_base_bdevs_discovered": 4, 00:09:54.173 "num_base_bdevs_operational": 4, 00:09:54.173 "base_bdevs_list": [ 00:09:54.173 { 00:09:54.173 "name": "BaseBdev1", 00:09:54.173 "uuid": "89d7ff6a-3d45-5806-8929-c269f3bc42b0", 00:09:54.173 "is_configured": true, 00:09:54.173 "data_offset": 2048, 00:09:54.173 "data_size": 63488 00:09:54.173 }, 00:09:54.173 { 00:09:54.173 "name": "BaseBdev2", 00:09:54.173 "uuid": "aa660985-8e83-54c0-a379-b6d05de57be7", 00:09:54.173 "is_configured": true, 00:09:54.173 "data_offset": 2048, 00:09:54.173 "data_size": 63488 00:09:54.173 }, 00:09:54.173 { 00:09:54.173 "name": "BaseBdev3", 00:09:54.173 "uuid": "b6af7af5-1a83-5d13-9763-997f2430a4bb", 00:09:54.173 "is_configured": true, 00:09:54.173 "data_offset": 2048, 00:09:54.173 "data_size": 63488 00:09:54.173 }, 00:09:54.173 { 00:09:54.173 "name": "BaseBdev4", 00:09:54.173 "uuid": "8a0a8d1a-f6cc-5af9-89b8-29fa1e2496b9", 00:09:54.173 "is_configured": true, 00:09:54.173 "data_offset": 2048, 00:09:54.173 "data_size": 63488 00:09:54.173 } 00:09:54.173 ] 00:09:54.173 }' 00:09:54.173 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.173 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:54.433 [2024-11-21 04:56:11.047474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.433 [2024-11-21 04:56:11.047566] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.433 [2024-11-21 04:56:11.050138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.433 [2024-11-21 04:56:11.050226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.433 [2024-11-21 04:56:11.050293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.433 [2024-11-21 04:56:11.050346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:54.433 { 00:09:54.433 "results": [ 00:09:54.433 { 00:09:54.433 "job": "raid_bdev1", 00:09:54.433 "core_mask": "0x1", 00:09:54.433 "workload": "randrw", 00:09:54.433 "percentage": 50, 00:09:54.433 "status": "finished", 00:09:54.433 "queue_depth": 1, 00:09:54.433 "io_size": 131072, 00:09:54.433 "runtime": 1.352722, 00:09:54.433 "iops": 16987.96944235401, 00:09:54.433 "mibps": 2123.496180294251, 00:09:54.433 "io_failed": 1, 00:09:54.433 "io_timeout": 0, 00:09:54.433 "avg_latency_us": 81.68511515778461, 00:09:54.433 "min_latency_us": 24.482096069868994, 00:09:54.433 "max_latency_us": 1459.5353711790392 00:09:54.433 } 00:09:54.433 ], 00:09:54.433 "core_count": 1 00:09:54.433 } 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82193 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 82193 ']' 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 82193 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82193 00:09:54.433 killing process with pid 82193 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82193' 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 82193 00:09:54.433 [2024-11-21 04:56:11.090831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.433 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 82193 00:09:54.433 [2024-11-21 04:56:11.124921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.693 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YpvlppIJiz 00:09:54.693 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:54.693 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:54.693 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:54.693 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:54.693 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:54.693 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:54.693 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:54.693 00:09:54.693 real 0m3.262s 00:09:54.693 user 0m4.114s 00:09:54.693 sys 0m0.529s 00:09:54.693 
************************************ 00:09:54.693 END TEST raid_write_error_test 00:09:54.693 ************************************ 00:09:54.693 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.693 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.693 04:56:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:54.693 04:56:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:54.693 04:56:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:54.693 04:56:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.693 04:56:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.693 ************************************ 00:09:54.693 START TEST raid_state_function_test 00:09:54.693 ************************************ 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.693 04:56:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:54.693 04:56:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:54.693 Process raid pid: 82320 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82320 00:09:54.693 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:54.694 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82320' 00:09:54.694 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82320 00:09:54.694 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82320 ']' 00:09:54.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.694 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.694 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.694 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.694 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.694 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.953 [2024-11-21 04:56:11.501075] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:09:54.953 [2024-11-21 04:56:11.501203] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.953 [2024-11-21 04:56:11.672696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.212 [2024-11-21 04:56:11.698363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.212 [2024-11-21 04:56:11.740448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.212 [2024-11-21 04:56:11.740486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.782 [2024-11-21 04:56:12.333738] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.782 [2024-11-21 04:56:12.333811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.782 [2024-11-21 04:56:12.333823] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.782 [2024-11-21 04:56:12.333834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.782 [2024-11-21 04:56:12.333841] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:55.782 [2024-11-21 04:56:12.333854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.782 [2024-11-21 04:56:12.333861] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:55.782 [2024-11-21 04:56:12.333871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.782 "name": "Existed_Raid", 00:09:55.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.782 "strip_size_kb": 64, 00:09:55.782 "state": "configuring", 00:09:55.782 "raid_level": "concat", 00:09:55.782 "superblock": false, 00:09:55.782 "num_base_bdevs": 4, 00:09:55.782 "num_base_bdevs_discovered": 0, 00:09:55.782 "num_base_bdevs_operational": 4, 00:09:55.782 "base_bdevs_list": [ 00:09:55.782 { 00:09:55.782 "name": "BaseBdev1", 00:09:55.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.782 "is_configured": false, 00:09:55.782 "data_offset": 0, 00:09:55.782 "data_size": 0 00:09:55.782 }, 00:09:55.782 { 00:09:55.782 "name": "BaseBdev2", 00:09:55.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.782 "is_configured": false, 00:09:55.782 "data_offset": 0, 00:09:55.782 "data_size": 0 00:09:55.782 }, 00:09:55.782 { 00:09:55.782 "name": "BaseBdev3", 00:09:55.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.782 "is_configured": false, 00:09:55.782 "data_offset": 0, 00:09:55.782 "data_size": 0 00:09:55.782 }, 00:09:55.782 { 00:09:55.782 "name": "BaseBdev4", 00:09:55.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.782 "is_configured": false, 00:09:55.782 "data_offset": 0, 00:09:55.782 "data_size": 0 00:09:55.782 } 00:09:55.782 ] 00:09:55.782 }' 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.782 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.351 [2024-11-21 04:56:12.788830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.351 [2024-11-21 04:56:12.788915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.351 [2024-11-21 04:56:12.800819] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.351 [2024-11-21 04:56:12.800896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.351 [2024-11-21 04:56:12.800923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.351 [2024-11-21 04:56:12.800946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.351 [2024-11-21 04:56:12.800963] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.351 [2024-11-21 04:56:12.800985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.351 [2024-11-21 04:56:12.801002] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:56.351 [2024-11-21 04:56:12.801045] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.351 [2024-11-21 04:56:12.821486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.351 BaseBdev1 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.351 [ 00:09:56.351 { 00:09:56.351 "name": "BaseBdev1", 00:09:56.351 "aliases": [ 00:09:56.351 "f2cdac2e-a54c-4a56-bacb-fcfff5b4024a" 00:09:56.351 ], 00:09:56.351 "product_name": "Malloc disk", 00:09:56.351 "block_size": 512, 00:09:56.351 "num_blocks": 65536, 00:09:56.351 "uuid": "f2cdac2e-a54c-4a56-bacb-fcfff5b4024a", 00:09:56.351 "assigned_rate_limits": { 00:09:56.351 "rw_ios_per_sec": 0, 00:09:56.351 "rw_mbytes_per_sec": 0, 00:09:56.351 "r_mbytes_per_sec": 0, 00:09:56.351 "w_mbytes_per_sec": 0 00:09:56.351 }, 00:09:56.351 "claimed": true, 00:09:56.351 "claim_type": "exclusive_write", 00:09:56.351 "zoned": false, 00:09:56.351 "supported_io_types": { 00:09:56.351 "read": true, 00:09:56.351 "write": true, 00:09:56.351 "unmap": true, 00:09:56.351 "flush": true, 00:09:56.351 "reset": true, 00:09:56.351 "nvme_admin": false, 00:09:56.351 "nvme_io": false, 00:09:56.351 "nvme_io_md": false, 00:09:56.351 "write_zeroes": true, 00:09:56.351 "zcopy": true, 00:09:56.351 "get_zone_info": false, 00:09:56.351 "zone_management": false, 00:09:56.351 "zone_append": false, 00:09:56.351 "compare": false, 00:09:56.351 "compare_and_write": false, 00:09:56.351 "abort": true, 00:09:56.351 "seek_hole": false, 00:09:56.351 "seek_data": false, 00:09:56.351 "copy": true, 00:09:56.351 "nvme_iov_md": false 00:09:56.351 }, 00:09:56.351 "memory_domains": [ 00:09:56.351 { 00:09:56.351 "dma_device_id": "system", 00:09:56.351 "dma_device_type": 1 00:09:56.351 }, 00:09:56.351 { 00:09:56.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.351 "dma_device_type": 2 00:09:56.351 } 00:09:56.351 ], 00:09:56.351 "driver_specific": {} 00:09:56.351 } 00:09:56.351 ] 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.351 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.351 "name": "Existed_Raid", 
00:09:56.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.351 "strip_size_kb": 64, 00:09:56.351 "state": "configuring", 00:09:56.351 "raid_level": "concat", 00:09:56.351 "superblock": false, 00:09:56.351 "num_base_bdevs": 4, 00:09:56.351 "num_base_bdevs_discovered": 1, 00:09:56.351 "num_base_bdevs_operational": 4, 00:09:56.351 "base_bdevs_list": [ 00:09:56.351 { 00:09:56.351 "name": "BaseBdev1", 00:09:56.351 "uuid": "f2cdac2e-a54c-4a56-bacb-fcfff5b4024a", 00:09:56.351 "is_configured": true, 00:09:56.351 "data_offset": 0, 00:09:56.351 "data_size": 65536 00:09:56.351 }, 00:09:56.351 { 00:09:56.352 "name": "BaseBdev2", 00:09:56.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.352 "is_configured": false, 00:09:56.352 "data_offset": 0, 00:09:56.352 "data_size": 0 00:09:56.352 }, 00:09:56.352 { 00:09:56.352 "name": "BaseBdev3", 00:09:56.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.352 "is_configured": false, 00:09:56.352 "data_offset": 0, 00:09:56.352 "data_size": 0 00:09:56.352 }, 00:09:56.352 { 00:09:56.352 "name": "BaseBdev4", 00:09:56.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.352 "is_configured": false, 00:09:56.352 "data_offset": 0, 00:09:56.352 "data_size": 0 00:09:56.352 } 00:09:56.352 ] 00:09:56.352 }' 00:09:56.352 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.352 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.612 [2024-11-21 04:56:13.280708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.612 [2024-11-21 04:56:13.280805] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.612 [2024-11-21 04:56:13.292739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.612 [2024-11-21 04:56:13.294611] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.612 [2024-11-21 04:56:13.294682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.612 [2024-11-21 04:56:13.294744] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.612 [2024-11-21 04:56:13.294783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.612 [2024-11-21 04:56:13.294816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:56.612 [2024-11-21 04:56:13.294851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.612 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.872 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.872 "name": "Existed_Raid", 00:09:56.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.872 "strip_size_kb": 64, 00:09:56.872 "state": "configuring", 00:09:56.872 "raid_level": "concat", 00:09:56.872 "superblock": false, 00:09:56.872 "num_base_bdevs": 4, 00:09:56.872 
"num_base_bdevs_discovered": 1, 00:09:56.872 "num_base_bdevs_operational": 4, 00:09:56.872 "base_bdevs_list": [ 00:09:56.872 { 00:09:56.872 "name": "BaseBdev1", 00:09:56.872 "uuid": "f2cdac2e-a54c-4a56-bacb-fcfff5b4024a", 00:09:56.872 "is_configured": true, 00:09:56.872 "data_offset": 0, 00:09:56.872 "data_size": 65536 00:09:56.872 }, 00:09:56.872 { 00:09:56.872 "name": "BaseBdev2", 00:09:56.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.872 "is_configured": false, 00:09:56.872 "data_offset": 0, 00:09:56.872 "data_size": 0 00:09:56.872 }, 00:09:56.872 { 00:09:56.872 "name": "BaseBdev3", 00:09:56.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.872 "is_configured": false, 00:09:56.872 "data_offset": 0, 00:09:56.872 "data_size": 0 00:09:56.872 }, 00:09:56.872 { 00:09:56.872 "name": "BaseBdev4", 00:09:56.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.872 "is_configured": false, 00:09:56.872 "data_offset": 0, 00:09:56.872 "data_size": 0 00:09:56.872 } 00:09:56.872 ] 00:09:56.872 }' 00:09:56.872 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.872 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.132 [2024-11-21 04:56:13.723000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.132 BaseBdev2 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:57.132 04:56:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.132 [ 00:09:57.132 { 00:09:57.132 "name": "BaseBdev2", 00:09:57.132 "aliases": [ 00:09:57.132 "4095de88-c36a-4c6f-91a9-133365408b14" 00:09:57.132 ], 00:09:57.132 "product_name": "Malloc disk", 00:09:57.132 "block_size": 512, 00:09:57.132 "num_blocks": 65536, 00:09:57.132 "uuid": "4095de88-c36a-4c6f-91a9-133365408b14", 00:09:57.132 "assigned_rate_limits": { 00:09:57.132 "rw_ios_per_sec": 0, 00:09:57.132 "rw_mbytes_per_sec": 0, 00:09:57.132 "r_mbytes_per_sec": 0, 00:09:57.132 "w_mbytes_per_sec": 0 00:09:57.132 }, 00:09:57.132 "claimed": true, 00:09:57.132 "claim_type": "exclusive_write", 00:09:57.132 "zoned": false, 00:09:57.132 "supported_io_types": { 
00:09:57.132 "read": true, 00:09:57.132 "write": true, 00:09:57.132 "unmap": true, 00:09:57.132 "flush": true, 00:09:57.132 "reset": true, 00:09:57.132 "nvme_admin": false, 00:09:57.132 "nvme_io": false, 00:09:57.132 "nvme_io_md": false, 00:09:57.132 "write_zeroes": true, 00:09:57.132 "zcopy": true, 00:09:57.132 "get_zone_info": false, 00:09:57.132 "zone_management": false, 00:09:57.132 "zone_append": false, 00:09:57.132 "compare": false, 00:09:57.132 "compare_and_write": false, 00:09:57.132 "abort": true, 00:09:57.132 "seek_hole": false, 00:09:57.132 "seek_data": false, 00:09:57.132 "copy": true, 00:09:57.132 "nvme_iov_md": false 00:09:57.132 }, 00:09:57.132 "memory_domains": [ 00:09:57.132 { 00:09:57.132 "dma_device_id": "system", 00:09:57.132 "dma_device_type": 1 00:09:57.132 }, 00:09:57.132 { 00:09:57.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.132 "dma_device_type": 2 00:09:57.132 } 00:09:57.132 ], 00:09:57.132 "driver_specific": {} 00:09:57.132 } 00:09:57.132 ] 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.132 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.132 "name": "Existed_Raid", 00:09:57.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.132 "strip_size_kb": 64, 00:09:57.132 "state": "configuring", 00:09:57.132 "raid_level": "concat", 00:09:57.132 "superblock": false, 00:09:57.132 "num_base_bdevs": 4, 00:09:57.132 "num_base_bdevs_discovered": 2, 00:09:57.132 "num_base_bdevs_operational": 4, 00:09:57.132 "base_bdevs_list": [ 00:09:57.132 { 00:09:57.132 "name": "BaseBdev1", 00:09:57.133 "uuid": "f2cdac2e-a54c-4a56-bacb-fcfff5b4024a", 00:09:57.133 "is_configured": true, 00:09:57.133 "data_offset": 0, 00:09:57.133 "data_size": 65536 00:09:57.133 }, 00:09:57.133 { 00:09:57.133 "name": "BaseBdev2", 00:09:57.133 "uuid": "4095de88-c36a-4c6f-91a9-133365408b14", 00:09:57.133 
"is_configured": true, 00:09:57.133 "data_offset": 0, 00:09:57.133 "data_size": 65536 00:09:57.133 }, 00:09:57.133 { 00:09:57.133 "name": "BaseBdev3", 00:09:57.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.133 "is_configured": false, 00:09:57.133 "data_offset": 0, 00:09:57.133 "data_size": 0 00:09:57.133 }, 00:09:57.133 { 00:09:57.133 "name": "BaseBdev4", 00:09:57.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.133 "is_configured": false, 00:09:57.133 "data_offset": 0, 00:09:57.133 "data_size": 0 00:09:57.133 } 00:09:57.133 ] 00:09:57.133 }' 00:09:57.133 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.133 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.703 [2024-11-21 04:56:14.173556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.703 BaseBdev3 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.703 [ 00:09:57.703 { 00:09:57.703 "name": "BaseBdev3", 00:09:57.703 "aliases": [ 00:09:57.703 "6024097a-43e4-47ed-875e-225ac79e3b30" 00:09:57.703 ], 00:09:57.703 "product_name": "Malloc disk", 00:09:57.703 "block_size": 512, 00:09:57.703 "num_blocks": 65536, 00:09:57.703 "uuid": "6024097a-43e4-47ed-875e-225ac79e3b30", 00:09:57.703 "assigned_rate_limits": { 00:09:57.703 "rw_ios_per_sec": 0, 00:09:57.703 "rw_mbytes_per_sec": 0, 00:09:57.703 "r_mbytes_per_sec": 0, 00:09:57.703 "w_mbytes_per_sec": 0 00:09:57.703 }, 00:09:57.703 "claimed": true, 00:09:57.703 "claim_type": "exclusive_write", 00:09:57.703 "zoned": false, 00:09:57.703 "supported_io_types": { 00:09:57.703 "read": true, 00:09:57.703 "write": true, 00:09:57.703 "unmap": true, 00:09:57.703 "flush": true, 00:09:57.703 "reset": true, 00:09:57.703 "nvme_admin": false, 00:09:57.703 "nvme_io": false, 00:09:57.703 "nvme_io_md": false, 00:09:57.703 "write_zeroes": true, 00:09:57.703 "zcopy": true, 00:09:57.703 "get_zone_info": false, 00:09:57.703 "zone_management": false, 00:09:57.703 "zone_append": false, 00:09:57.703 "compare": false, 00:09:57.703 "compare_and_write": false, 
00:09:57.703 "abort": true, 00:09:57.703 "seek_hole": false, 00:09:57.703 "seek_data": false, 00:09:57.703 "copy": true, 00:09:57.703 "nvme_iov_md": false 00:09:57.703 }, 00:09:57.703 "memory_domains": [ 00:09:57.703 { 00:09:57.703 "dma_device_id": "system", 00:09:57.703 "dma_device_type": 1 00:09:57.703 }, 00:09:57.703 { 00:09:57.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.703 "dma_device_type": 2 00:09:57.703 } 00:09:57.703 ], 00:09:57.703 "driver_specific": {} 00:09:57.703 } 00:09:57.703 ] 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.703 "name": "Existed_Raid", 00:09:57.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.703 "strip_size_kb": 64, 00:09:57.703 "state": "configuring", 00:09:57.703 "raid_level": "concat", 00:09:57.703 "superblock": false, 00:09:57.703 "num_base_bdevs": 4, 00:09:57.703 "num_base_bdevs_discovered": 3, 00:09:57.703 "num_base_bdevs_operational": 4, 00:09:57.703 "base_bdevs_list": [ 00:09:57.703 { 00:09:57.703 "name": "BaseBdev1", 00:09:57.703 "uuid": "f2cdac2e-a54c-4a56-bacb-fcfff5b4024a", 00:09:57.703 "is_configured": true, 00:09:57.703 "data_offset": 0, 00:09:57.703 "data_size": 65536 00:09:57.703 }, 00:09:57.703 { 00:09:57.703 "name": "BaseBdev2", 00:09:57.703 "uuid": "4095de88-c36a-4c6f-91a9-133365408b14", 00:09:57.703 "is_configured": true, 00:09:57.703 "data_offset": 0, 00:09:57.703 "data_size": 65536 00:09:57.703 }, 00:09:57.703 { 00:09:57.703 "name": "BaseBdev3", 00:09:57.703 "uuid": "6024097a-43e4-47ed-875e-225ac79e3b30", 00:09:57.703 "is_configured": true, 00:09:57.703 "data_offset": 0, 00:09:57.703 "data_size": 65536 00:09:57.703 }, 00:09:57.703 { 00:09:57.703 "name": "BaseBdev4", 00:09:57.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.703 "is_configured": false, 
00:09:57.703 "data_offset": 0, 00:09:57.703 "data_size": 0 00:09:57.703 } 00:09:57.703 ] 00:09:57.703 }' 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.703 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.963 [2024-11-21 04:56:14.679793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:57.963 [2024-11-21 04:56:14.679934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:57.963 [2024-11-21 04:56:14.679948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:57.963 [2024-11-21 04:56:14.680282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:57.963 [2024-11-21 04:56:14.680434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:57.963 [2024-11-21 04:56:14.680447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:57.963 [2024-11-21 04:56:14.680662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.963 BaseBdev4 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.963 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.223 [ 00:09:58.223 { 00:09:58.223 "name": "BaseBdev4", 00:09:58.223 "aliases": [ 00:09:58.223 "cd13ce3f-7ab7-4828-b611-a6a6a4c15c3c" 00:09:58.223 ], 00:09:58.223 "product_name": "Malloc disk", 00:09:58.223 "block_size": 512, 00:09:58.223 "num_blocks": 65536, 00:09:58.223 "uuid": "cd13ce3f-7ab7-4828-b611-a6a6a4c15c3c", 00:09:58.223 "assigned_rate_limits": { 00:09:58.223 "rw_ios_per_sec": 0, 00:09:58.223 "rw_mbytes_per_sec": 0, 00:09:58.223 "r_mbytes_per_sec": 0, 00:09:58.223 "w_mbytes_per_sec": 0 00:09:58.223 }, 00:09:58.223 "claimed": true, 00:09:58.223 "claim_type": "exclusive_write", 00:09:58.223 "zoned": false, 00:09:58.223 "supported_io_types": { 00:09:58.223 "read": true, 00:09:58.223 "write": true, 00:09:58.223 "unmap": true, 00:09:58.223 "flush": true, 00:09:58.223 "reset": true, 00:09:58.223 
"nvme_admin": false, 00:09:58.223 "nvme_io": false, 00:09:58.223 "nvme_io_md": false, 00:09:58.223 "write_zeroes": true, 00:09:58.223 "zcopy": true, 00:09:58.223 "get_zone_info": false, 00:09:58.223 "zone_management": false, 00:09:58.223 "zone_append": false, 00:09:58.223 "compare": false, 00:09:58.223 "compare_and_write": false, 00:09:58.223 "abort": true, 00:09:58.223 "seek_hole": false, 00:09:58.223 "seek_data": false, 00:09:58.223 "copy": true, 00:09:58.223 "nvme_iov_md": false 00:09:58.223 }, 00:09:58.223 "memory_domains": [ 00:09:58.223 { 00:09:58.223 "dma_device_id": "system", 00:09:58.223 "dma_device_type": 1 00:09:58.223 }, 00:09:58.223 { 00:09:58.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.223 "dma_device_type": 2 00:09:58.223 } 00:09:58.223 ], 00:09:58.223 "driver_specific": {} 00:09:58.223 } 00:09:58.223 ] 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.223 
04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.223 "name": "Existed_Raid", 00:09:58.223 "uuid": "7a5326e5-3af3-4b9d-ace0-db55b446946a", 00:09:58.223 "strip_size_kb": 64, 00:09:58.223 "state": "online", 00:09:58.223 "raid_level": "concat", 00:09:58.223 "superblock": false, 00:09:58.223 "num_base_bdevs": 4, 00:09:58.223 "num_base_bdevs_discovered": 4, 00:09:58.223 "num_base_bdevs_operational": 4, 00:09:58.223 "base_bdevs_list": [ 00:09:58.223 { 00:09:58.223 "name": "BaseBdev1", 00:09:58.223 "uuid": "f2cdac2e-a54c-4a56-bacb-fcfff5b4024a", 00:09:58.223 "is_configured": true, 00:09:58.223 "data_offset": 0, 00:09:58.223 "data_size": 65536 00:09:58.223 }, 00:09:58.223 { 00:09:58.223 "name": "BaseBdev2", 00:09:58.223 "uuid": "4095de88-c36a-4c6f-91a9-133365408b14", 00:09:58.223 "is_configured": true, 00:09:58.223 "data_offset": 0, 00:09:58.223 "data_size": 65536 00:09:58.223 }, 00:09:58.223 { 00:09:58.223 "name": "BaseBdev3", 
00:09:58.223 "uuid": "6024097a-43e4-47ed-875e-225ac79e3b30", 00:09:58.223 "is_configured": true, 00:09:58.223 "data_offset": 0, 00:09:58.223 "data_size": 65536 00:09:58.223 }, 00:09:58.223 { 00:09:58.223 "name": "BaseBdev4", 00:09:58.223 "uuid": "cd13ce3f-7ab7-4828-b611-a6a6a4c15c3c", 00:09:58.223 "is_configured": true, 00:09:58.223 "data_offset": 0, 00:09:58.223 "data_size": 65536 00:09:58.223 } 00:09:58.223 ] 00:09:58.223 }' 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.223 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.483 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.483 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:58.483 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:58.484 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.484 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.484 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.484 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:58.484 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.484 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.484 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.484 [2024-11-21 04:56:15.175485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.484 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.484 
04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:58.484 "name": "Existed_Raid", 00:09:58.484 "aliases": [ 00:09:58.484 "7a5326e5-3af3-4b9d-ace0-db55b446946a" 00:09:58.484 ], 00:09:58.484 "product_name": "Raid Volume", 00:09:58.484 "block_size": 512, 00:09:58.484 "num_blocks": 262144, 00:09:58.484 "uuid": "7a5326e5-3af3-4b9d-ace0-db55b446946a", 00:09:58.484 "assigned_rate_limits": { 00:09:58.484 "rw_ios_per_sec": 0, 00:09:58.484 "rw_mbytes_per_sec": 0, 00:09:58.484 "r_mbytes_per_sec": 0, 00:09:58.484 "w_mbytes_per_sec": 0 00:09:58.484 }, 00:09:58.484 "claimed": false, 00:09:58.484 "zoned": false, 00:09:58.484 "supported_io_types": { 00:09:58.484 "read": true, 00:09:58.484 "write": true, 00:09:58.484 "unmap": true, 00:09:58.484 "flush": true, 00:09:58.484 "reset": true, 00:09:58.484 "nvme_admin": false, 00:09:58.484 "nvme_io": false, 00:09:58.484 "nvme_io_md": false, 00:09:58.484 "write_zeroes": true, 00:09:58.484 "zcopy": false, 00:09:58.484 "get_zone_info": false, 00:09:58.484 "zone_management": false, 00:09:58.484 "zone_append": false, 00:09:58.484 "compare": false, 00:09:58.484 "compare_and_write": false, 00:09:58.484 "abort": false, 00:09:58.484 "seek_hole": false, 00:09:58.484 "seek_data": false, 00:09:58.484 "copy": false, 00:09:58.484 "nvme_iov_md": false 00:09:58.484 }, 00:09:58.484 "memory_domains": [ 00:09:58.484 { 00:09:58.484 "dma_device_id": "system", 00:09:58.484 "dma_device_type": 1 00:09:58.484 }, 00:09:58.484 { 00:09:58.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.484 "dma_device_type": 2 00:09:58.484 }, 00:09:58.484 { 00:09:58.484 "dma_device_id": "system", 00:09:58.484 "dma_device_type": 1 00:09:58.484 }, 00:09:58.484 { 00:09:58.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.484 "dma_device_type": 2 00:09:58.484 }, 00:09:58.484 { 00:09:58.484 "dma_device_id": "system", 00:09:58.484 "dma_device_type": 1 00:09:58.484 }, 00:09:58.484 { 00:09:58.484 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:58.484 "dma_device_type": 2 00:09:58.484 }, 00:09:58.484 { 00:09:58.484 "dma_device_id": "system", 00:09:58.484 "dma_device_type": 1 00:09:58.484 }, 00:09:58.484 { 00:09:58.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.484 "dma_device_type": 2 00:09:58.484 } 00:09:58.484 ], 00:09:58.484 "driver_specific": { 00:09:58.484 "raid": { 00:09:58.484 "uuid": "7a5326e5-3af3-4b9d-ace0-db55b446946a", 00:09:58.484 "strip_size_kb": 64, 00:09:58.484 "state": "online", 00:09:58.484 "raid_level": "concat", 00:09:58.484 "superblock": false, 00:09:58.484 "num_base_bdevs": 4, 00:09:58.484 "num_base_bdevs_discovered": 4, 00:09:58.484 "num_base_bdevs_operational": 4, 00:09:58.484 "base_bdevs_list": [ 00:09:58.484 { 00:09:58.484 "name": "BaseBdev1", 00:09:58.484 "uuid": "f2cdac2e-a54c-4a56-bacb-fcfff5b4024a", 00:09:58.484 "is_configured": true, 00:09:58.484 "data_offset": 0, 00:09:58.484 "data_size": 65536 00:09:58.484 }, 00:09:58.484 { 00:09:58.484 "name": "BaseBdev2", 00:09:58.484 "uuid": "4095de88-c36a-4c6f-91a9-133365408b14", 00:09:58.484 "is_configured": true, 00:09:58.484 "data_offset": 0, 00:09:58.484 "data_size": 65536 00:09:58.484 }, 00:09:58.484 { 00:09:58.484 "name": "BaseBdev3", 00:09:58.484 "uuid": "6024097a-43e4-47ed-875e-225ac79e3b30", 00:09:58.484 "is_configured": true, 00:09:58.484 "data_offset": 0, 00:09:58.484 "data_size": 65536 00:09:58.484 }, 00:09:58.484 { 00:09:58.484 "name": "BaseBdev4", 00:09:58.484 "uuid": "cd13ce3f-7ab7-4828-b611-a6a6a4c15c3c", 00:09:58.484 "is_configured": true, 00:09:58.484 "data_offset": 0, 00:09:58.484 "data_size": 65536 00:09:58.484 } 00:09:58.484 ] 00:09:58.484 } 00:09:58.484 } 00:09:58.484 }' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:58.744 BaseBdev2 
00:09:58.744 BaseBdev3 00:09:58.744 BaseBdev4' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.744 04:56:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.744 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.745 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.745 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:58.745 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.745 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.745 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.745 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.745 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.745 04:56:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.745 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:58.745 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.745 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.007 [2024-11-21 04:56:15.478623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.007 [2024-11-21 04:56:15.478652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.007 [2024-11-21 04:56:15.478705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.007 "name": "Existed_Raid", 00:09:59.007 "uuid": "7a5326e5-3af3-4b9d-ace0-db55b446946a", 00:09:59.007 "strip_size_kb": 64, 00:09:59.007 "state": "offline", 00:09:59.007 "raid_level": "concat", 00:09:59.007 "superblock": false, 00:09:59.007 "num_base_bdevs": 4, 00:09:59.007 "num_base_bdevs_discovered": 3, 00:09:59.007 "num_base_bdevs_operational": 3, 00:09:59.007 "base_bdevs_list": [ 00:09:59.007 { 00:09:59.007 "name": null, 00:09:59.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.007 "is_configured": false, 00:09:59.007 "data_offset": 0, 00:09:59.007 "data_size": 65536 00:09:59.007 }, 00:09:59.007 { 00:09:59.007 "name": "BaseBdev2", 00:09:59.007 "uuid": "4095de88-c36a-4c6f-91a9-133365408b14", 00:09:59.007 "is_configured": 
true, 00:09:59.007 "data_offset": 0, 00:09:59.007 "data_size": 65536 00:09:59.007 }, 00:09:59.007 { 00:09:59.007 "name": "BaseBdev3", 00:09:59.007 "uuid": "6024097a-43e4-47ed-875e-225ac79e3b30", 00:09:59.007 "is_configured": true, 00:09:59.007 "data_offset": 0, 00:09:59.007 "data_size": 65536 00:09:59.007 }, 00:09:59.007 { 00:09:59.007 "name": "BaseBdev4", 00:09:59.007 "uuid": "cd13ce3f-7ab7-4828-b611-a6a6a4c15c3c", 00:09:59.007 "is_configured": true, 00:09:59.007 "data_offset": 0, 00:09:59.007 "data_size": 65536 00:09:59.007 } 00:09:59.007 ] 00:09:59.007 }' 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.007 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:59.269 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.269 [2024-11-21 04:56:15.997182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.529 [2024-11-21 04:56:16.068216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.529 04:56:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.529 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.530 [2024-11-21 04:56:16.135062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:59.530 [2024-11-21 04:56:16.135178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.530 BaseBdev2 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.530 [ 00:09:59.530 { 00:09:59.530 "name": "BaseBdev2", 00:09:59.530 "aliases": [ 00:09:59.530 "c48ca711-66b6-42af-a6d4-86845320a0f2" 00:09:59.530 ], 00:09:59.530 "product_name": "Malloc disk", 00:09:59.530 "block_size": 512, 00:09:59.530 "num_blocks": 65536, 00:09:59.530 "uuid": "c48ca711-66b6-42af-a6d4-86845320a0f2", 00:09:59.530 "assigned_rate_limits": { 00:09:59.530 "rw_ios_per_sec": 0, 00:09:59.530 "rw_mbytes_per_sec": 0, 00:09:59.530 "r_mbytes_per_sec": 0, 00:09:59.530 "w_mbytes_per_sec": 0 00:09:59.530 }, 00:09:59.530 "claimed": false, 00:09:59.530 "zoned": false, 00:09:59.530 "supported_io_types": { 00:09:59.530 "read": true, 00:09:59.530 "write": true, 00:09:59.530 "unmap": true, 00:09:59.530 "flush": true, 00:09:59.530 "reset": true, 00:09:59.530 "nvme_admin": false, 00:09:59.530 "nvme_io": false, 00:09:59.530 "nvme_io_md": false, 00:09:59.530 "write_zeroes": true, 00:09:59.530 "zcopy": true, 00:09:59.530 "get_zone_info": false, 00:09:59.530 "zone_management": false, 00:09:59.530 "zone_append": false, 00:09:59.530 "compare": false, 00:09:59.530 "compare_and_write": false, 00:09:59.530 "abort": true, 00:09:59.530 "seek_hole": false, 00:09:59.530 
"seek_data": false, 00:09:59.530 "copy": true, 00:09:59.530 "nvme_iov_md": false 00:09:59.530 }, 00:09:59.530 "memory_domains": [ 00:09:59.530 { 00:09:59.530 "dma_device_id": "system", 00:09:59.530 "dma_device_type": 1 00:09:59.530 }, 00:09:59.530 { 00:09:59.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.530 "dma_device_type": 2 00:09:59.530 } 00:09:59.530 ], 00:09:59.530 "driver_specific": {} 00:09:59.530 } 00:09:59.530 ] 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.530 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.791 BaseBdev3 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.791 [ 00:09:59.791 { 00:09:59.791 "name": "BaseBdev3", 00:09:59.791 "aliases": [ 00:09:59.791 "3dbc3609-9e4e-4db7-b62a-9e295908c852" 00:09:59.791 ], 00:09:59.791 "product_name": "Malloc disk", 00:09:59.791 "block_size": 512, 00:09:59.791 "num_blocks": 65536, 00:09:59.791 "uuid": "3dbc3609-9e4e-4db7-b62a-9e295908c852", 00:09:59.791 "assigned_rate_limits": { 00:09:59.791 "rw_ios_per_sec": 0, 00:09:59.791 "rw_mbytes_per_sec": 0, 00:09:59.791 "r_mbytes_per_sec": 0, 00:09:59.791 "w_mbytes_per_sec": 0 00:09:59.791 }, 00:09:59.791 "claimed": false, 00:09:59.791 "zoned": false, 00:09:59.791 "supported_io_types": { 00:09:59.791 "read": true, 00:09:59.791 "write": true, 00:09:59.791 "unmap": true, 00:09:59.791 "flush": true, 00:09:59.791 "reset": true, 00:09:59.791 "nvme_admin": false, 00:09:59.791 "nvme_io": false, 00:09:59.791 "nvme_io_md": false, 00:09:59.791 "write_zeroes": true, 00:09:59.791 "zcopy": true, 00:09:59.791 "get_zone_info": false, 00:09:59.791 "zone_management": false, 00:09:59.791 "zone_append": false, 00:09:59.791 "compare": false, 00:09:59.791 "compare_and_write": false, 00:09:59.791 "abort": true, 00:09:59.791 "seek_hole": false, 00:09:59.791 "seek_data": false, 
00:09:59.791 "copy": true, 00:09:59.791 "nvme_iov_md": false 00:09:59.791 }, 00:09:59.791 "memory_domains": [ 00:09:59.791 { 00:09:59.791 "dma_device_id": "system", 00:09:59.791 "dma_device_type": 1 00:09:59.791 }, 00:09:59.791 { 00:09:59.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.791 "dma_device_type": 2 00:09:59.791 } 00:09:59.791 ], 00:09:59.791 "driver_specific": {} 00:09:59.791 } 00:09:59.791 ] 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.791 BaseBdev4 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.791 
04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.791 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.792 [ 00:09:59.792 { 00:09:59.792 "name": "BaseBdev4", 00:09:59.792 "aliases": [ 00:09:59.792 "5b8cb3d4-9a14-420e-925d-361cbef776e8" 00:09:59.792 ], 00:09:59.792 "product_name": "Malloc disk", 00:09:59.792 "block_size": 512, 00:09:59.792 "num_blocks": 65536, 00:09:59.792 "uuid": "5b8cb3d4-9a14-420e-925d-361cbef776e8", 00:09:59.792 "assigned_rate_limits": { 00:09:59.792 "rw_ios_per_sec": 0, 00:09:59.792 "rw_mbytes_per_sec": 0, 00:09:59.792 "r_mbytes_per_sec": 0, 00:09:59.792 "w_mbytes_per_sec": 0 00:09:59.792 }, 00:09:59.792 "claimed": false, 00:09:59.792 "zoned": false, 00:09:59.792 "supported_io_types": { 00:09:59.792 "read": true, 00:09:59.792 "write": true, 00:09:59.792 "unmap": true, 00:09:59.792 "flush": true, 00:09:59.792 "reset": true, 00:09:59.792 "nvme_admin": false, 00:09:59.792 "nvme_io": false, 00:09:59.792 "nvme_io_md": false, 00:09:59.792 "write_zeroes": true, 00:09:59.792 "zcopy": true, 00:09:59.792 "get_zone_info": false, 00:09:59.792 "zone_management": false, 00:09:59.792 "zone_append": false, 00:09:59.792 "compare": false, 00:09:59.792 "compare_and_write": false, 00:09:59.792 "abort": true, 00:09:59.792 "seek_hole": false, 00:09:59.792 "seek_data": false, 00:09:59.792 
"copy": true, 00:09:59.792 "nvme_iov_md": false 00:09:59.792 }, 00:09:59.792 "memory_domains": [ 00:09:59.792 { 00:09:59.792 "dma_device_id": "system", 00:09:59.792 "dma_device_type": 1 00:09:59.792 }, 00:09:59.792 { 00:09:59.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.792 "dma_device_type": 2 00:09:59.792 } 00:09:59.792 ], 00:09:59.792 "driver_specific": {} 00:09:59.792 } 00:09:59.792 ] 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.792 [2024-11-21 04:56:16.365883] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.792 [2024-11-21 04:56:16.365924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.792 [2024-11-21 04:56:16.365943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.792 [2024-11-21 04:56:16.367755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.792 [2024-11-21 04:56:16.367860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.792 04:56:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.792 "name": "Existed_Raid", 00:09:59.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.792 "strip_size_kb": 64, 00:09:59.792 "state": "configuring", 00:09:59.792 
"raid_level": "concat", 00:09:59.792 "superblock": false, 00:09:59.792 "num_base_bdevs": 4, 00:09:59.792 "num_base_bdevs_discovered": 3, 00:09:59.792 "num_base_bdevs_operational": 4, 00:09:59.792 "base_bdevs_list": [ 00:09:59.792 { 00:09:59.792 "name": "BaseBdev1", 00:09:59.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.792 "is_configured": false, 00:09:59.792 "data_offset": 0, 00:09:59.792 "data_size": 0 00:09:59.792 }, 00:09:59.792 { 00:09:59.792 "name": "BaseBdev2", 00:09:59.792 "uuid": "c48ca711-66b6-42af-a6d4-86845320a0f2", 00:09:59.792 "is_configured": true, 00:09:59.792 "data_offset": 0, 00:09:59.792 "data_size": 65536 00:09:59.792 }, 00:09:59.792 { 00:09:59.792 "name": "BaseBdev3", 00:09:59.792 "uuid": "3dbc3609-9e4e-4db7-b62a-9e295908c852", 00:09:59.792 "is_configured": true, 00:09:59.792 "data_offset": 0, 00:09:59.792 "data_size": 65536 00:09:59.792 }, 00:09:59.792 { 00:09:59.792 "name": "BaseBdev4", 00:09:59.792 "uuid": "5b8cb3d4-9a14-420e-925d-361cbef776e8", 00:09:59.792 "is_configured": true, 00:09:59.792 "data_offset": 0, 00:09:59.792 "data_size": 65536 00:09:59.792 } 00:09:59.792 ] 00:09:59.792 }' 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.792 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.362 [2024-11-21 04:56:16.849083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.362 "name": "Existed_Raid", 00:10:00.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.362 "strip_size_kb": 64, 00:10:00.362 "state": "configuring", 00:10:00.362 "raid_level": "concat", 00:10:00.362 "superblock": false, 
00:10:00.362 "num_base_bdevs": 4, 00:10:00.362 "num_base_bdevs_discovered": 2, 00:10:00.362 "num_base_bdevs_operational": 4, 00:10:00.362 "base_bdevs_list": [ 00:10:00.362 { 00:10:00.362 "name": "BaseBdev1", 00:10:00.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.362 "is_configured": false, 00:10:00.362 "data_offset": 0, 00:10:00.362 "data_size": 0 00:10:00.362 }, 00:10:00.362 { 00:10:00.362 "name": null, 00:10:00.362 "uuid": "c48ca711-66b6-42af-a6d4-86845320a0f2", 00:10:00.362 "is_configured": false, 00:10:00.362 "data_offset": 0, 00:10:00.362 "data_size": 65536 00:10:00.362 }, 00:10:00.362 { 00:10:00.362 "name": "BaseBdev3", 00:10:00.362 "uuid": "3dbc3609-9e4e-4db7-b62a-9e295908c852", 00:10:00.362 "is_configured": true, 00:10:00.362 "data_offset": 0, 00:10:00.362 "data_size": 65536 00:10:00.362 }, 00:10:00.362 { 00:10:00.362 "name": "BaseBdev4", 00:10:00.362 "uuid": "5b8cb3d4-9a14-420e-925d-361cbef776e8", 00:10:00.362 "is_configured": true, 00:10:00.362 "data_offset": 0, 00:10:00.362 "data_size": 65536 00:10:00.362 } 00:10:00.362 ] 00:10:00.362 }' 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.362 04:56:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:00.623 04:56:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.623 [2024-11-21 04:56:17.343220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.623 BaseBdev1 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.623 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.883 [ 00:10:00.883 { 00:10:00.883 "name": "BaseBdev1", 00:10:00.883 "aliases": [ 00:10:00.883 "33de5b8b-384a-4a7f-bde0-fd97c8a3262f" 00:10:00.883 ], 00:10:00.883 "product_name": "Malloc disk", 00:10:00.883 "block_size": 512, 00:10:00.883 "num_blocks": 65536, 00:10:00.883 "uuid": "33de5b8b-384a-4a7f-bde0-fd97c8a3262f", 00:10:00.883 "assigned_rate_limits": { 00:10:00.883 "rw_ios_per_sec": 0, 00:10:00.883 "rw_mbytes_per_sec": 0, 00:10:00.883 "r_mbytes_per_sec": 0, 00:10:00.883 "w_mbytes_per_sec": 0 00:10:00.883 }, 00:10:00.883 "claimed": true, 00:10:00.883 "claim_type": "exclusive_write", 00:10:00.883 "zoned": false, 00:10:00.883 "supported_io_types": { 00:10:00.883 "read": true, 00:10:00.883 "write": true, 00:10:00.883 "unmap": true, 00:10:00.883 "flush": true, 00:10:00.883 "reset": true, 00:10:00.883 "nvme_admin": false, 00:10:00.883 "nvme_io": false, 00:10:00.883 "nvme_io_md": false, 00:10:00.883 "write_zeroes": true, 00:10:00.883 "zcopy": true, 00:10:00.883 "get_zone_info": false, 00:10:00.883 "zone_management": false, 00:10:00.883 "zone_append": false, 00:10:00.883 "compare": false, 00:10:00.883 "compare_and_write": false, 00:10:00.883 "abort": true, 00:10:00.883 "seek_hole": false, 00:10:00.883 "seek_data": false, 00:10:00.883 "copy": true, 00:10:00.883 "nvme_iov_md": false 00:10:00.883 }, 00:10:00.883 "memory_domains": [ 00:10:00.883 { 00:10:00.883 "dma_device_id": "system", 00:10:00.883 "dma_device_type": 1 00:10:00.883 }, 00:10:00.883 { 00:10:00.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.883 "dma_device_type": 2 00:10:00.883 } 00:10:00.883 ], 00:10:00.883 "driver_specific": {} 00:10:00.883 } 00:10:00.883 ] 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.883 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.883 "name": "Existed_Raid", 00:10:00.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.883 "strip_size_kb": 64, 00:10:00.883 "state": "configuring", 00:10:00.883 "raid_level": "concat", 00:10:00.883 "superblock": false, 
00:10:00.883 "num_base_bdevs": 4, 00:10:00.883 "num_base_bdevs_discovered": 3, 00:10:00.883 "num_base_bdevs_operational": 4, 00:10:00.883 "base_bdevs_list": [ 00:10:00.883 { 00:10:00.883 "name": "BaseBdev1", 00:10:00.883 "uuid": "33de5b8b-384a-4a7f-bde0-fd97c8a3262f", 00:10:00.883 "is_configured": true, 00:10:00.884 "data_offset": 0, 00:10:00.884 "data_size": 65536 00:10:00.884 }, 00:10:00.884 { 00:10:00.884 "name": null, 00:10:00.884 "uuid": "c48ca711-66b6-42af-a6d4-86845320a0f2", 00:10:00.884 "is_configured": false, 00:10:00.884 "data_offset": 0, 00:10:00.884 "data_size": 65536 00:10:00.884 }, 00:10:00.884 { 00:10:00.884 "name": "BaseBdev3", 00:10:00.884 "uuid": "3dbc3609-9e4e-4db7-b62a-9e295908c852", 00:10:00.884 "is_configured": true, 00:10:00.884 "data_offset": 0, 00:10:00.884 "data_size": 65536 00:10:00.884 }, 00:10:00.884 { 00:10:00.884 "name": "BaseBdev4", 00:10:00.884 "uuid": "5b8cb3d4-9a14-420e-925d-361cbef776e8", 00:10:00.884 "is_configured": true, 00:10:00.884 "data_offset": 0, 00:10:00.884 "data_size": 65536 00:10:00.884 } 00:10:00.884 ] 00:10:00.884 }' 00:10:00.884 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.884 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:01.143 04:56:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.143 [2024-11-21 04:56:17.862477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.143 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.143 04:56:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.402 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.402 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.402 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.402 "name": "Existed_Raid", 00:10:01.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.402 "strip_size_kb": 64, 00:10:01.402 "state": "configuring", 00:10:01.402 "raid_level": "concat", 00:10:01.402 "superblock": false, 00:10:01.402 "num_base_bdevs": 4, 00:10:01.402 "num_base_bdevs_discovered": 2, 00:10:01.402 "num_base_bdevs_operational": 4, 00:10:01.402 "base_bdevs_list": [ 00:10:01.402 { 00:10:01.402 "name": "BaseBdev1", 00:10:01.402 "uuid": "33de5b8b-384a-4a7f-bde0-fd97c8a3262f", 00:10:01.402 "is_configured": true, 00:10:01.402 "data_offset": 0, 00:10:01.402 "data_size": 65536 00:10:01.402 }, 00:10:01.402 { 00:10:01.402 "name": null, 00:10:01.402 "uuid": "c48ca711-66b6-42af-a6d4-86845320a0f2", 00:10:01.403 "is_configured": false, 00:10:01.403 "data_offset": 0, 00:10:01.403 "data_size": 65536 00:10:01.403 }, 00:10:01.403 { 00:10:01.403 "name": null, 00:10:01.403 "uuid": "3dbc3609-9e4e-4db7-b62a-9e295908c852", 00:10:01.403 "is_configured": false, 00:10:01.403 "data_offset": 0, 00:10:01.403 "data_size": 65536 00:10:01.403 }, 00:10:01.403 { 00:10:01.403 "name": "BaseBdev4", 00:10:01.403 "uuid": "5b8cb3d4-9a14-420e-925d-361cbef776e8", 00:10:01.403 "is_configured": true, 00:10:01.403 "data_offset": 0, 00:10:01.403 "data_size": 65536 00:10:01.403 } 00:10:01.403 ] 00:10:01.403 }' 00:10:01.403 04:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.403 04:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.662 [2024-11-21 04:56:18.377646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.662 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.923 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.923 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.923 "name": "Existed_Raid", 00:10:01.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.923 "strip_size_kb": 64, 00:10:01.923 "state": "configuring", 00:10:01.923 "raid_level": "concat", 00:10:01.923 "superblock": false, 00:10:01.923 "num_base_bdevs": 4, 00:10:01.923 "num_base_bdevs_discovered": 3, 00:10:01.923 "num_base_bdevs_operational": 4, 00:10:01.923 "base_bdevs_list": [ 00:10:01.923 { 00:10:01.923 "name": "BaseBdev1", 00:10:01.923 "uuid": "33de5b8b-384a-4a7f-bde0-fd97c8a3262f", 00:10:01.923 "is_configured": true, 00:10:01.923 "data_offset": 0, 00:10:01.923 "data_size": 65536 00:10:01.923 }, 00:10:01.923 { 00:10:01.923 "name": null, 00:10:01.923 "uuid": "c48ca711-66b6-42af-a6d4-86845320a0f2", 00:10:01.923 "is_configured": false, 00:10:01.923 "data_offset": 0, 00:10:01.923 "data_size": 65536 00:10:01.923 }, 00:10:01.923 { 00:10:01.923 "name": "BaseBdev3", 00:10:01.923 "uuid": 
"3dbc3609-9e4e-4db7-b62a-9e295908c852", 00:10:01.923 "is_configured": true, 00:10:01.923 "data_offset": 0, 00:10:01.923 "data_size": 65536 00:10:01.923 }, 00:10:01.923 { 00:10:01.923 "name": "BaseBdev4", 00:10:01.923 "uuid": "5b8cb3d4-9a14-420e-925d-361cbef776e8", 00:10:01.923 "is_configured": true, 00:10:01.923 "data_offset": 0, 00:10:01.923 "data_size": 65536 00:10:01.923 } 00:10:01.923 ] 00:10:01.923 }' 00:10:01.923 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.923 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.182 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.182 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.182 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.182 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.182 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.182 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:02.182 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.182 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.182 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.182 [2024-11-21 04:56:18.912740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.441 "name": "Existed_Raid", 00:10:02.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.441 "strip_size_kb": 64, 00:10:02.441 "state": "configuring", 00:10:02.441 "raid_level": "concat", 00:10:02.441 "superblock": false, 00:10:02.441 "num_base_bdevs": 4, 00:10:02.441 
"num_base_bdevs_discovered": 2, 00:10:02.441 "num_base_bdevs_operational": 4, 00:10:02.441 "base_bdevs_list": [ 00:10:02.441 { 00:10:02.441 "name": null, 00:10:02.441 "uuid": "33de5b8b-384a-4a7f-bde0-fd97c8a3262f", 00:10:02.441 "is_configured": false, 00:10:02.441 "data_offset": 0, 00:10:02.441 "data_size": 65536 00:10:02.441 }, 00:10:02.441 { 00:10:02.441 "name": null, 00:10:02.441 "uuid": "c48ca711-66b6-42af-a6d4-86845320a0f2", 00:10:02.441 "is_configured": false, 00:10:02.441 "data_offset": 0, 00:10:02.441 "data_size": 65536 00:10:02.441 }, 00:10:02.441 { 00:10:02.441 "name": "BaseBdev3", 00:10:02.441 "uuid": "3dbc3609-9e4e-4db7-b62a-9e295908c852", 00:10:02.441 "is_configured": true, 00:10:02.441 "data_offset": 0, 00:10:02.441 "data_size": 65536 00:10:02.441 }, 00:10:02.441 { 00:10:02.441 "name": "BaseBdev4", 00:10:02.441 "uuid": "5b8cb3d4-9a14-420e-925d-361cbef776e8", 00:10:02.441 "is_configured": true, 00:10:02.441 "data_offset": 0, 00:10:02.441 "data_size": 65536 00:10:02.441 } 00:10:02.441 ] 00:10:02.441 }' 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.441 04:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.698 [2024-11-21 04:56:19.374555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.698 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.699 "name": "Existed_Raid", 00:10:02.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.699 "strip_size_kb": 64, 00:10:02.699 "state": "configuring", 00:10:02.699 "raid_level": "concat", 00:10:02.699 "superblock": false, 00:10:02.699 "num_base_bdevs": 4, 00:10:02.699 "num_base_bdevs_discovered": 3, 00:10:02.699 "num_base_bdevs_operational": 4, 00:10:02.699 "base_bdevs_list": [ 00:10:02.699 { 00:10:02.699 "name": null, 00:10:02.699 "uuid": "33de5b8b-384a-4a7f-bde0-fd97c8a3262f", 00:10:02.699 "is_configured": false, 00:10:02.699 "data_offset": 0, 00:10:02.699 "data_size": 65536 00:10:02.699 }, 00:10:02.699 { 00:10:02.699 "name": "BaseBdev2", 00:10:02.699 "uuid": "c48ca711-66b6-42af-a6d4-86845320a0f2", 00:10:02.699 "is_configured": true, 00:10:02.699 "data_offset": 0, 00:10:02.699 "data_size": 65536 00:10:02.699 }, 00:10:02.699 { 00:10:02.699 "name": "BaseBdev3", 00:10:02.699 "uuid": "3dbc3609-9e4e-4db7-b62a-9e295908c852", 00:10:02.699 "is_configured": true, 00:10:02.699 "data_offset": 0, 00:10:02.699 "data_size": 65536 00:10:02.699 }, 00:10:02.699 { 00:10:02.699 "name": "BaseBdev4", 00:10:02.699 "uuid": "5b8cb3d4-9a14-420e-925d-361cbef776e8", 00:10:02.699 "is_configured": true, 00:10:02.699 "data_offset": 0, 00:10:02.699 "data_size": 65536 00:10:02.699 } 00:10:02.699 ] 00:10:02.699 }' 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.699 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 33de5b8b-384a-4a7f-bde0-fd97c8a3262f 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.267 [2024-11-21 04:56:19.896727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:03.267 [2024-11-21 04:56:19.896844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:03.267 [2024-11-21 04:56:19.896870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:03.267 [2024-11-21 04:56:19.897222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:03.267 [2024-11-21 04:56:19.897395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:03.267 [2024-11-21 04:56:19.897440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:03.267 [2024-11-21 04:56:19.897699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.267 NewBaseBdev 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.267 [ 00:10:03.267 { 00:10:03.267 "name": "NewBaseBdev", 00:10:03.267 "aliases": [ 00:10:03.267 "33de5b8b-384a-4a7f-bde0-fd97c8a3262f" 00:10:03.267 ], 00:10:03.267 "product_name": "Malloc disk", 00:10:03.267 "block_size": 512, 00:10:03.267 "num_blocks": 65536, 00:10:03.267 "uuid": "33de5b8b-384a-4a7f-bde0-fd97c8a3262f", 00:10:03.267 "assigned_rate_limits": { 00:10:03.267 "rw_ios_per_sec": 0, 00:10:03.267 "rw_mbytes_per_sec": 0, 00:10:03.267 "r_mbytes_per_sec": 0, 00:10:03.267 "w_mbytes_per_sec": 0 00:10:03.267 }, 00:10:03.267 "claimed": true, 00:10:03.267 "claim_type": "exclusive_write", 00:10:03.267 "zoned": false, 00:10:03.267 "supported_io_types": { 00:10:03.267 "read": true, 00:10:03.267 "write": true, 00:10:03.267 "unmap": true, 00:10:03.267 "flush": true, 00:10:03.267 "reset": true, 00:10:03.267 "nvme_admin": false, 00:10:03.267 "nvme_io": false, 00:10:03.267 "nvme_io_md": false, 00:10:03.267 "write_zeroes": true, 00:10:03.267 "zcopy": true, 00:10:03.267 "get_zone_info": false, 00:10:03.267 "zone_management": false, 00:10:03.267 "zone_append": false, 00:10:03.267 "compare": false, 00:10:03.267 "compare_and_write": false, 00:10:03.267 "abort": true, 00:10:03.267 "seek_hole": false, 00:10:03.267 "seek_data": false, 00:10:03.267 "copy": true, 00:10:03.267 "nvme_iov_md": false 00:10:03.267 }, 00:10:03.267 "memory_domains": [ 00:10:03.267 { 00:10:03.267 "dma_device_id": "system", 00:10:03.267 "dma_device_type": 1 00:10:03.267 }, 00:10:03.267 { 00:10:03.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.267 "dma_device_type": 2 00:10:03.267 } 00:10:03.267 ], 00:10:03.267 "driver_specific": {} 00:10:03.267 } 00:10:03.267 ] 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.267 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.268 "name": "Existed_Raid", 00:10:03.268 "uuid": "3d1041e5-e6e3-43a7-8887-6546b40bbacf", 00:10:03.268 "strip_size_kb": 64, 00:10:03.268 "state": "online", 00:10:03.268 "raid_level": "concat", 00:10:03.268 "superblock": false, 00:10:03.268 
"num_base_bdevs": 4, 00:10:03.268 "num_base_bdevs_discovered": 4, 00:10:03.268 "num_base_bdevs_operational": 4, 00:10:03.268 "base_bdevs_list": [ 00:10:03.268 { 00:10:03.268 "name": "NewBaseBdev", 00:10:03.268 "uuid": "33de5b8b-384a-4a7f-bde0-fd97c8a3262f", 00:10:03.268 "is_configured": true, 00:10:03.268 "data_offset": 0, 00:10:03.268 "data_size": 65536 00:10:03.268 }, 00:10:03.268 { 00:10:03.268 "name": "BaseBdev2", 00:10:03.268 "uuid": "c48ca711-66b6-42af-a6d4-86845320a0f2", 00:10:03.268 "is_configured": true, 00:10:03.268 "data_offset": 0, 00:10:03.268 "data_size": 65536 00:10:03.268 }, 00:10:03.268 { 00:10:03.268 "name": "BaseBdev3", 00:10:03.268 "uuid": "3dbc3609-9e4e-4db7-b62a-9e295908c852", 00:10:03.268 "is_configured": true, 00:10:03.268 "data_offset": 0, 00:10:03.268 "data_size": 65536 00:10:03.268 }, 00:10:03.268 { 00:10:03.268 "name": "BaseBdev4", 00:10:03.268 "uuid": "5b8cb3d4-9a14-420e-925d-361cbef776e8", 00:10:03.268 "is_configured": true, 00:10:03.268 "data_offset": 0, 00:10:03.268 "data_size": 65536 00:10:03.268 } 00:10:03.268 ] 00:10:03.268 }' 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.268 04:56:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.838 04:56:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.838 [2024-11-21 04:56:20.404325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.838 "name": "Existed_Raid", 00:10:03.838 "aliases": [ 00:10:03.838 "3d1041e5-e6e3-43a7-8887-6546b40bbacf" 00:10:03.838 ], 00:10:03.838 "product_name": "Raid Volume", 00:10:03.838 "block_size": 512, 00:10:03.838 "num_blocks": 262144, 00:10:03.838 "uuid": "3d1041e5-e6e3-43a7-8887-6546b40bbacf", 00:10:03.838 "assigned_rate_limits": { 00:10:03.838 "rw_ios_per_sec": 0, 00:10:03.838 "rw_mbytes_per_sec": 0, 00:10:03.838 "r_mbytes_per_sec": 0, 00:10:03.838 "w_mbytes_per_sec": 0 00:10:03.838 }, 00:10:03.838 "claimed": false, 00:10:03.838 "zoned": false, 00:10:03.838 "supported_io_types": { 00:10:03.838 "read": true, 00:10:03.838 "write": true, 00:10:03.838 "unmap": true, 00:10:03.838 "flush": true, 00:10:03.838 "reset": true, 00:10:03.838 "nvme_admin": false, 00:10:03.838 "nvme_io": false, 00:10:03.838 "nvme_io_md": false, 00:10:03.838 "write_zeroes": true, 00:10:03.838 "zcopy": false, 00:10:03.838 "get_zone_info": false, 00:10:03.838 "zone_management": false, 00:10:03.838 "zone_append": false, 00:10:03.838 "compare": false, 00:10:03.838 "compare_and_write": false, 00:10:03.838 "abort": false, 00:10:03.838 "seek_hole": false, 00:10:03.838 "seek_data": false, 00:10:03.838 "copy": false, 00:10:03.838 "nvme_iov_md": false 00:10:03.838 }, 
00:10:03.838 "memory_domains": [ 00:10:03.838 { 00:10:03.838 "dma_device_id": "system", 00:10:03.838 "dma_device_type": 1 00:10:03.838 }, 00:10:03.838 { 00:10:03.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.838 "dma_device_type": 2 00:10:03.838 }, 00:10:03.838 { 00:10:03.838 "dma_device_id": "system", 00:10:03.838 "dma_device_type": 1 00:10:03.838 }, 00:10:03.838 { 00:10:03.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.838 "dma_device_type": 2 00:10:03.838 }, 00:10:03.838 { 00:10:03.838 "dma_device_id": "system", 00:10:03.838 "dma_device_type": 1 00:10:03.838 }, 00:10:03.838 { 00:10:03.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.838 "dma_device_type": 2 00:10:03.838 }, 00:10:03.838 { 00:10:03.838 "dma_device_id": "system", 00:10:03.838 "dma_device_type": 1 00:10:03.838 }, 00:10:03.838 { 00:10:03.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.838 "dma_device_type": 2 00:10:03.838 } 00:10:03.838 ], 00:10:03.838 "driver_specific": { 00:10:03.838 "raid": { 00:10:03.838 "uuid": "3d1041e5-e6e3-43a7-8887-6546b40bbacf", 00:10:03.838 "strip_size_kb": 64, 00:10:03.838 "state": "online", 00:10:03.838 "raid_level": "concat", 00:10:03.838 "superblock": false, 00:10:03.838 "num_base_bdevs": 4, 00:10:03.838 "num_base_bdevs_discovered": 4, 00:10:03.838 "num_base_bdevs_operational": 4, 00:10:03.838 "base_bdevs_list": [ 00:10:03.838 { 00:10:03.838 "name": "NewBaseBdev", 00:10:03.838 "uuid": "33de5b8b-384a-4a7f-bde0-fd97c8a3262f", 00:10:03.838 "is_configured": true, 00:10:03.838 "data_offset": 0, 00:10:03.838 "data_size": 65536 00:10:03.838 }, 00:10:03.838 { 00:10:03.838 "name": "BaseBdev2", 00:10:03.838 "uuid": "c48ca711-66b6-42af-a6d4-86845320a0f2", 00:10:03.838 "is_configured": true, 00:10:03.838 "data_offset": 0, 00:10:03.838 "data_size": 65536 00:10:03.838 }, 00:10:03.838 { 00:10:03.838 "name": "BaseBdev3", 00:10:03.838 "uuid": "3dbc3609-9e4e-4db7-b62a-9e295908c852", 00:10:03.838 "is_configured": true, 00:10:03.838 "data_offset": 0, 
00:10:03.838 "data_size": 65536 00:10:03.838 }, 00:10:03.838 { 00:10:03.838 "name": "BaseBdev4", 00:10:03.838 "uuid": "5b8cb3d4-9a14-420e-925d-361cbef776e8", 00:10:03.838 "is_configured": true, 00:10:03.838 "data_offset": 0, 00:10:03.838 "data_size": 65536 00:10:03.838 } 00:10:03.838 ] 00:10:03.838 } 00:10:03.838 } 00:10:03.838 }' 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:03.838 BaseBdev2 00:10:03.838 BaseBdev3 00:10:03.838 BaseBdev4' 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.838 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.098 [2024-11-21 04:56:20.679451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.098 [2024-11-21 04:56:20.679520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.098 [2024-11-21 04:56:20.679600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.098 [2024-11-21 04:56:20.679669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.098 [2024-11-21 04:56:20.679684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82320 00:10:04.098 04:56:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82320 ']' 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82320 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82320 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.098 killing process with pid 82320 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82320' 00:10:04.098 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 82320 00:10:04.099 [2024-11-21 04:56:20.731066] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.099 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 82320 00:10:04.099 [2024-11-21 04:56:20.770335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.358 04:56:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:04.358 ************************************ 00:10:04.358 END TEST raid_state_function_test 00:10:04.358 ************************************ 00:10:04.358 00:10:04.358 real 0m9.571s 00:10:04.358 user 0m16.365s 00:10:04.358 sys 0m2.060s 00:10:04.358 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.358 04:56:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.358 04:56:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:04.358 04:56:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:04.358 04:56:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.358 04:56:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.358 ************************************ 00:10:04.358 START TEST raid_state_function_test_sb 00:10:04.358 ************************************ 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:04.358 Process raid pid: 82975 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82975 00:10:04.358 
04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82975' 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82975 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82975 ']' 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.358 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.617 [2024-11-21 04:56:21.141497] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:10:04.617 [2024-11-21 04:56:21.141708] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.617 [2024-11-21 04:56:21.311248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.618 [2024-11-21 04:56:21.336090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.877 [2024-11-21 04:56:21.377802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.877 [2024-11-21 04:56:21.377917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.460 [2024-11-21 04:56:21.974709] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.460 [2024-11-21 04:56:21.974763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.460 [2024-11-21 04:56:21.974773] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.460 [2024-11-21 04:56:21.974782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.460 [2024-11-21 04:56:21.974788] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:05.460 [2024-11-21 04:56:21.974798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.460 [2024-11-21 04:56:21.974804] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:05.460 [2024-11-21 04:56:21.974812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.460 
04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.460 04:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.460 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.460 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.460 "name": "Existed_Raid", 00:10:05.460 "uuid": "f0724cf6-68a1-46a5-9230-e145521d0091", 00:10:05.460 "strip_size_kb": 64, 00:10:05.460 "state": "configuring", 00:10:05.460 "raid_level": "concat", 00:10:05.460 "superblock": true, 00:10:05.460 "num_base_bdevs": 4, 00:10:05.460 "num_base_bdevs_discovered": 0, 00:10:05.460 "num_base_bdevs_operational": 4, 00:10:05.460 "base_bdevs_list": [ 00:10:05.460 { 00:10:05.460 "name": "BaseBdev1", 00:10:05.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.460 "is_configured": false, 00:10:05.460 "data_offset": 0, 00:10:05.460 "data_size": 0 00:10:05.460 }, 00:10:05.460 { 00:10:05.460 "name": "BaseBdev2", 00:10:05.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.460 "is_configured": false, 00:10:05.460 "data_offset": 0, 00:10:05.460 "data_size": 0 00:10:05.460 }, 00:10:05.460 { 00:10:05.460 "name": "BaseBdev3", 00:10:05.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.460 "is_configured": false, 00:10:05.460 "data_offset": 0, 00:10:05.460 "data_size": 0 00:10:05.460 }, 00:10:05.460 { 00:10:05.460 "name": "BaseBdev4", 00:10:05.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.460 "is_configured": false, 00:10:05.460 "data_offset": 0, 00:10:05.460 "data_size": 0 00:10:05.460 } 00:10:05.460 ] 00:10:05.460 }' 00:10:05.460 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.460 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.026 04:56:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.026 [2024-11-21 04:56:22.457767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.026 [2024-11-21 04:56:22.457849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.026 [2024-11-21 04:56:22.469766] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.026 [2024-11-21 04:56:22.469856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.026 [2024-11-21 04:56:22.469883] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.026 [2024-11-21 04:56:22.469906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.026 [2024-11-21 04:56:22.469923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.026 [2024-11-21 04:56:22.469943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.026 [2024-11-21 04:56:22.469960] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:06.026 [2024-11-21 04:56:22.470018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.026 [2024-11-21 04:56:22.490386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.026 BaseBdev1 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.026 [ 00:10:06.026 { 00:10:06.026 "name": "BaseBdev1", 00:10:06.026 "aliases": [ 00:10:06.026 "9af855fb-3e44-4ad8-9465-bcc6c4148281" 00:10:06.026 ], 00:10:06.026 "product_name": "Malloc disk", 00:10:06.026 "block_size": 512, 00:10:06.026 "num_blocks": 65536, 00:10:06.026 "uuid": "9af855fb-3e44-4ad8-9465-bcc6c4148281", 00:10:06.026 "assigned_rate_limits": { 00:10:06.026 "rw_ios_per_sec": 0, 00:10:06.026 "rw_mbytes_per_sec": 0, 00:10:06.026 "r_mbytes_per_sec": 0, 00:10:06.026 "w_mbytes_per_sec": 0 00:10:06.026 }, 00:10:06.026 "claimed": true, 00:10:06.026 "claim_type": "exclusive_write", 00:10:06.026 "zoned": false, 00:10:06.026 "supported_io_types": { 00:10:06.026 "read": true, 00:10:06.026 "write": true, 00:10:06.026 "unmap": true, 00:10:06.026 "flush": true, 00:10:06.026 "reset": true, 00:10:06.026 "nvme_admin": false, 00:10:06.026 "nvme_io": false, 00:10:06.026 "nvme_io_md": false, 00:10:06.026 "write_zeroes": true, 00:10:06.026 "zcopy": true, 00:10:06.026 "get_zone_info": false, 00:10:06.026 "zone_management": false, 00:10:06.026 "zone_append": false, 00:10:06.026 "compare": false, 00:10:06.026 "compare_and_write": false, 00:10:06.026 "abort": true, 00:10:06.026 "seek_hole": false, 00:10:06.026 "seek_data": false, 00:10:06.026 "copy": true, 00:10:06.026 "nvme_iov_md": false 00:10:06.026 }, 00:10:06.026 "memory_domains": [ 00:10:06.026 { 00:10:06.026 "dma_device_id": "system", 00:10:06.026 "dma_device_type": 1 00:10:06.026 }, 00:10:06.026 { 00:10:06.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.026 "dma_device_type": 2 00:10:06.026 } 
00:10:06.026 ], 00:10:06.026 "driver_specific": {} 00:10:06.026 } 00:10:06.026 ] 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.026 04:56:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.026 "name": "Existed_Raid", 00:10:06.026 "uuid": "bff5c821-9f2e-4e3a-b04e-ff0ca7668423", 00:10:06.026 "strip_size_kb": 64, 00:10:06.026 "state": "configuring", 00:10:06.026 "raid_level": "concat", 00:10:06.026 "superblock": true, 00:10:06.026 "num_base_bdevs": 4, 00:10:06.026 "num_base_bdevs_discovered": 1, 00:10:06.026 "num_base_bdevs_operational": 4, 00:10:06.026 "base_bdevs_list": [ 00:10:06.026 { 00:10:06.026 "name": "BaseBdev1", 00:10:06.026 "uuid": "9af855fb-3e44-4ad8-9465-bcc6c4148281", 00:10:06.026 "is_configured": true, 00:10:06.026 "data_offset": 2048, 00:10:06.026 "data_size": 63488 00:10:06.026 }, 00:10:06.026 { 00:10:06.026 "name": "BaseBdev2", 00:10:06.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.026 "is_configured": false, 00:10:06.026 "data_offset": 0, 00:10:06.026 "data_size": 0 00:10:06.026 }, 00:10:06.026 { 00:10:06.026 "name": "BaseBdev3", 00:10:06.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.026 "is_configured": false, 00:10:06.026 "data_offset": 0, 00:10:06.026 "data_size": 0 00:10:06.026 }, 00:10:06.026 { 00:10:06.026 "name": "BaseBdev4", 00:10:06.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.026 "is_configured": false, 00:10:06.026 "data_offset": 0, 00:10:06.026 "data_size": 0 00:10:06.026 } 00:10:06.026 ] 00:10:06.026 }' 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.026 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.285 04:56:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.285 [2024-11-21 04:56:22.957583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.285 [2024-11-21 04:56:22.957627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.285 [2024-11-21 04:56:22.965617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.285 [2024-11-21 04:56:22.967564] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.285 [2024-11-21 04:56:22.967605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.285 [2024-11-21 04:56:22.967617] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.285 [2024-11-21 04:56:22.967626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.285 [2024-11-21 04:56:22.967634] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:06.285 [2024-11-21 04:56:22.967643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.285 04:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.544 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:06.544 "name": "Existed_Raid", 00:10:06.544 "uuid": "c98a1d3a-77c1-4f5b-a1b6-5d7864d375ae", 00:10:06.544 "strip_size_kb": 64, 00:10:06.544 "state": "configuring", 00:10:06.544 "raid_level": "concat", 00:10:06.544 "superblock": true, 00:10:06.544 "num_base_bdevs": 4, 00:10:06.544 "num_base_bdevs_discovered": 1, 00:10:06.544 "num_base_bdevs_operational": 4, 00:10:06.544 "base_bdevs_list": [ 00:10:06.544 { 00:10:06.544 "name": "BaseBdev1", 00:10:06.544 "uuid": "9af855fb-3e44-4ad8-9465-bcc6c4148281", 00:10:06.544 "is_configured": true, 00:10:06.544 "data_offset": 2048, 00:10:06.544 "data_size": 63488 00:10:06.544 }, 00:10:06.544 { 00:10:06.544 "name": "BaseBdev2", 00:10:06.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.544 "is_configured": false, 00:10:06.544 "data_offset": 0, 00:10:06.544 "data_size": 0 00:10:06.544 }, 00:10:06.544 { 00:10:06.544 "name": "BaseBdev3", 00:10:06.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.544 "is_configured": false, 00:10:06.544 "data_offset": 0, 00:10:06.544 "data_size": 0 00:10:06.544 }, 00:10:06.544 { 00:10:06.544 "name": "BaseBdev4", 00:10:06.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.544 "is_configured": false, 00:10:06.544 "data_offset": 0, 00:10:06.544 "data_size": 0 00:10:06.544 } 00:10:06.544 ] 00:10:06.544 }' 00:10:06.544 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.544 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.803 [2024-11-21 04:56:23.415852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:06.803 BaseBdev2 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.803 [ 00:10:06.803 { 00:10:06.803 "name": "BaseBdev2", 00:10:06.803 "aliases": [ 00:10:06.803 "9d030df7-f11f-44a1-8290-c92ac8d7d5eb" 00:10:06.803 ], 00:10:06.803 "product_name": "Malloc disk", 00:10:06.803 "block_size": 512, 00:10:06.803 "num_blocks": 65536, 00:10:06.803 "uuid": "9d030df7-f11f-44a1-8290-c92ac8d7d5eb", 
00:10:06.803 "assigned_rate_limits": { 00:10:06.803 "rw_ios_per_sec": 0, 00:10:06.803 "rw_mbytes_per_sec": 0, 00:10:06.803 "r_mbytes_per_sec": 0, 00:10:06.803 "w_mbytes_per_sec": 0 00:10:06.803 }, 00:10:06.803 "claimed": true, 00:10:06.803 "claim_type": "exclusive_write", 00:10:06.803 "zoned": false, 00:10:06.803 "supported_io_types": { 00:10:06.803 "read": true, 00:10:06.803 "write": true, 00:10:06.803 "unmap": true, 00:10:06.803 "flush": true, 00:10:06.803 "reset": true, 00:10:06.803 "nvme_admin": false, 00:10:06.803 "nvme_io": false, 00:10:06.803 "nvme_io_md": false, 00:10:06.803 "write_zeroes": true, 00:10:06.803 "zcopy": true, 00:10:06.803 "get_zone_info": false, 00:10:06.803 "zone_management": false, 00:10:06.803 "zone_append": false, 00:10:06.803 "compare": false, 00:10:06.803 "compare_and_write": false, 00:10:06.803 "abort": true, 00:10:06.803 "seek_hole": false, 00:10:06.803 "seek_data": false, 00:10:06.803 "copy": true, 00:10:06.803 "nvme_iov_md": false 00:10:06.803 }, 00:10:06.803 "memory_domains": [ 00:10:06.803 { 00:10:06.803 "dma_device_id": "system", 00:10:06.803 "dma_device_type": 1 00:10:06.803 }, 00:10:06.803 { 00:10:06.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.803 "dma_device_type": 2 00:10:06.803 } 00:10:06.803 ], 00:10:06.803 "driver_specific": {} 00:10:06.803 } 00:10:06.803 ] 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.803 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.804 "name": "Existed_Raid", 00:10:06.804 "uuid": "c98a1d3a-77c1-4f5b-a1b6-5d7864d375ae", 00:10:06.804 "strip_size_kb": 64, 00:10:06.804 "state": "configuring", 00:10:06.804 "raid_level": "concat", 00:10:06.804 "superblock": true, 00:10:06.804 "num_base_bdevs": 4, 00:10:06.804 "num_base_bdevs_discovered": 2, 00:10:06.804 
"num_base_bdevs_operational": 4, 00:10:06.804 "base_bdevs_list": [ 00:10:06.804 { 00:10:06.804 "name": "BaseBdev1", 00:10:06.804 "uuid": "9af855fb-3e44-4ad8-9465-bcc6c4148281", 00:10:06.804 "is_configured": true, 00:10:06.804 "data_offset": 2048, 00:10:06.804 "data_size": 63488 00:10:06.804 }, 00:10:06.804 { 00:10:06.804 "name": "BaseBdev2", 00:10:06.804 "uuid": "9d030df7-f11f-44a1-8290-c92ac8d7d5eb", 00:10:06.804 "is_configured": true, 00:10:06.804 "data_offset": 2048, 00:10:06.804 "data_size": 63488 00:10:06.804 }, 00:10:06.804 { 00:10:06.804 "name": "BaseBdev3", 00:10:06.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.804 "is_configured": false, 00:10:06.804 "data_offset": 0, 00:10:06.804 "data_size": 0 00:10:06.804 }, 00:10:06.804 { 00:10:06.804 "name": "BaseBdev4", 00:10:06.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.804 "is_configured": false, 00:10:06.804 "data_offset": 0, 00:10:06.804 "data_size": 0 00:10:06.804 } 00:10:06.804 ] 00:10:06.804 }' 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.804 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.371 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.372 [2024-11-21 04:56:23.897482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.372 BaseBdev3 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.372 [ 00:10:07.372 { 00:10:07.372 "name": "BaseBdev3", 00:10:07.372 "aliases": [ 00:10:07.372 "1e59db3c-082f-4002-891b-ebbbf32be1ef" 00:10:07.372 ], 00:10:07.372 "product_name": "Malloc disk", 00:10:07.372 "block_size": 512, 00:10:07.372 "num_blocks": 65536, 00:10:07.372 "uuid": "1e59db3c-082f-4002-891b-ebbbf32be1ef", 00:10:07.372 "assigned_rate_limits": { 00:10:07.372 "rw_ios_per_sec": 0, 00:10:07.372 "rw_mbytes_per_sec": 0, 00:10:07.372 "r_mbytes_per_sec": 0, 00:10:07.372 "w_mbytes_per_sec": 0 00:10:07.372 }, 00:10:07.372 "claimed": true, 00:10:07.372 "claim_type": "exclusive_write", 00:10:07.372 "zoned": false, 00:10:07.372 "supported_io_types": { 
00:10:07.372 "read": true, 00:10:07.372 "write": true, 00:10:07.372 "unmap": true, 00:10:07.372 "flush": true, 00:10:07.372 "reset": true, 00:10:07.372 "nvme_admin": false, 00:10:07.372 "nvme_io": false, 00:10:07.372 "nvme_io_md": false, 00:10:07.372 "write_zeroes": true, 00:10:07.372 "zcopy": true, 00:10:07.372 "get_zone_info": false, 00:10:07.372 "zone_management": false, 00:10:07.372 "zone_append": false, 00:10:07.372 "compare": false, 00:10:07.372 "compare_and_write": false, 00:10:07.372 "abort": true, 00:10:07.372 "seek_hole": false, 00:10:07.372 "seek_data": false, 00:10:07.372 "copy": true, 00:10:07.372 "nvme_iov_md": false 00:10:07.372 }, 00:10:07.372 "memory_domains": [ 00:10:07.372 { 00:10:07.372 "dma_device_id": "system", 00:10:07.372 "dma_device_type": 1 00:10:07.372 }, 00:10:07.372 { 00:10:07.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.372 "dma_device_type": 2 00:10:07.372 } 00:10:07.372 ], 00:10:07.372 "driver_specific": {} 00:10:07.372 } 00:10:07.372 ] 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.372 "name": "Existed_Raid", 00:10:07.372 "uuid": "c98a1d3a-77c1-4f5b-a1b6-5d7864d375ae", 00:10:07.372 "strip_size_kb": 64, 00:10:07.372 "state": "configuring", 00:10:07.372 "raid_level": "concat", 00:10:07.372 "superblock": true, 00:10:07.372 "num_base_bdevs": 4, 00:10:07.372 "num_base_bdevs_discovered": 3, 00:10:07.372 "num_base_bdevs_operational": 4, 00:10:07.372 "base_bdevs_list": [ 00:10:07.372 { 00:10:07.372 "name": "BaseBdev1", 00:10:07.372 "uuid": "9af855fb-3e44-4ad8-9465-bcc6c4148281", 00:10:07.372 "is_configured": true, 00:10:07.372 "data_offset": 2048, 00:10:07.372 "data_size": 63488 00:10:07.372 }, 00:10:07.372 { 00:10:07.372 "name": "BaseBdev2", 00:10:07.372 
"uuid": "9d030df7-f11f-44a1-8290-c92ac8d7d5eb", 00:10:07.372 "is_configured": true, 00:10:07.372 "data_offset": 2048, 00:10:07.372 "data_size": 63488 00:10:07.372 }, 00:10:07.372 { 00:10:07.372 "name": "BaseBdev3", 00:10:07.372 "uuid": "1e59db3c-082f-4002-891b-ebbbf32be1ef", 00:10:07.372 "is_configured": true, 00:10:07.372 "data_offset": 2048, 00:10:07.372 "data_size": 63488 00:10:07.372 }, 00:10:07.372 { 00:10:07.372 "name": "BaseBdev4", 00:10:07.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.372 "is_configured": false, 00:10:07.372 "data_offset": 0, 00:10:07.372 "data_size": 0 00:10:07.372 } 00:10:07.372 ] 00:10:07.372 }' 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.372 04:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.941 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:07.941 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.942 [2024-11-21 04:56:24.379885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:07.942 [2024-11-21 04:56:24.380116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:07.942 [2024-11-21 04:56:24.380142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:07.942 BaseBdev4 00:10:07.942 [2024-11-21 04:56:24.380425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:07.942 [2024-11-21 04:56:24.380610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:07.942 [2024-11-21 04:56:24.380624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:07.942 [2024-11-21 04:56:24.380746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.942 [ 00:10:07.942 { 00:10:07.942 "name": "BaseBdev4", 00:10:07.942 "aliases": [ 00:10:07.942 "708246b1-b0b7-4d74-b805-a2eb97ed1d6d" 00:10:07.942 ], 00:10:07.942 "product_name": "Malloc disk", 00:10:07.942 "block_size": 512, 00:10:07.942 
"num_blocks": 65536, 00:10:07.942 "uuid": "708246b1-b0b7-4d74-b805-a2eb97ed1d6d", 00:10:07.942 "assigned_rate_limits": { 00:10:07.942 "rw_ios_per_sec": 0, 00:10:07.942 "rw_mbytes_per_sec": 0, 00:10:07.942 "r_mbytes_per_sec": 0, 00:10:07.942 "w_mbytes_per_sec": 0 00:10:07.942 }, 00:10:07.942 "claimed": true, 00:10:07.942 "claim_type": "exclusive_write", 00:10:07.942 "zoned": false, 00:10:07.942 "supported_io_types": { 00:10:07.942 "read": true, 00:10:07.942 "write": true, 00:10:07.942 "unmap": true, 00:10:07.942 "flush": true, 00:10:07.942 "reset": true, 00:10:07.942 "nvme_admin": false, 00:10:07.942 "nvme_io": false, 00:10:07.942 "nvme_io_md": false, 00:10:07.942 "write_zeroes": true, 00:10:07.942 "zcopy": true, 00:10:07.942 "get_zone_info": false, 00:10:07.942 "zone_management": false, 00:10:07.942 "zone_append": false, 00:10:07.942 "compare": false, 00:10:07.942 "compare_and_write": false, 00:10:07.942 "abort": true, 00:10:07.942 "seek_hole": false, 00:10:07.942 "seek_data": false, 00:10:07.942 "copy": true, 00:10:07.942 "nvme_iov_md": false 00:10:07.942 }, 00:10:07.942 "memory_domains": [ 00:10:07.942 { 00:10:07.942 "dma_device_id": "system", 00:10:07.942 "dma_device_type": 1 00:10:07.942 }, 00:10:07.942 { 00:10:07.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.942 "dma_device_type": 2 00:10:07.942 } 00:10:07.942 ], 00:10:07.942 "driver_specific": {} 00:10:07.942 } 00:10:07.942 ] 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.942 "name": "Existed_Raid", 00:10:07.942 "uuid": "c98a1d3a-77c1-4f5b-a1b6-5d7864d375ae", 00:10:07.942 "strip_size_kb": 64, 00:10:07.942 "state": "online", 00:10:07.942 "raid_level": "concat", 00:10:07.942 "superblock": true, 00:10:07.942 "num_base_bdevs": 4, 
00:10:07.942 "num_base_bdevs_discovered": 4, 00:10:07.942 "num_base_bdevs_operational": 4, 00:10:07.942 "base_bdevs_list": [ 00:10:07.942 { 00:10:07.942 "name": "BaseBdev1", 00:10:07.942 "uuid": "9af855fb-3e44-4ad8-9465-bcc6c4148281", 00:10:07.942 "is_configured": true, 00:10:07.942 "data_offset": 2048, 00:10:07.942 "data_size": 63488 00:10:07.942 }, 00:10:07.942 { 00:10:07.942 "name": "BaseBdev2", 00:10:07.942 "uuid": "9d030df7-f11f-44a1-8290-c92ac8d7d5eb", 00:10:07.942 "is_configured": true, 00:10:07.942 "data_offset": 2048, 00:10:07.942 "data_size": 63488 00:10:07.942 }, 00:10:07.942 { 00:10:07.942 "name": "BaseBdev3", 00:10:07.942 "uuid": "1e59db3c-082f-4002-891b-ebbbf32be1ef", 00:10:07.942 "is_configured": true, 00:10:07.942 "data_offset": 2048, 00:10:07.942 "data_size": 63488 00:10:07.942 }, 00:10:07.942 { 00:10:07.942 "name": "BaseBdev4", 00:10:07.942 "uuid": "708246b1-b0b7-4d74-b805-a2eb97ed1d6d", 00:10:07.942 "is_configured": true, 00:10:07.942 "data_offset": 2048, 00:10:07.942 "data_size": 63488 00:10:07.942 } 00:10:07.942 ] 00:10:07.942 }' 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.942 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.201 
04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.201 [2024-11-21 04:56:24.871507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.201 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.201 "name": "Existed_Raid", 00:10:08.201 "aliases": [ 00:10:08.201 "c98a1d3a-77c1-4f5b-a1b6-5d7864d375ae" 00:10:08.201 ], 00:10:08.201 "product_name": "Raid Volume", 00:10:08.201 "block_size": 512, 00:10:08.201 "num_blocks": 253952, 00:10:08.201 "uuid": "c98a1d3a-77c1-4f5b-a1b6-5d7864d375ae", 00:10:08.201 "assigned_rate_limits": { 00:10:08.201 "rw_ios_per_sec": 0, 00:10:08.201 "rw_mbytes_per_sec": 0, 00:10:08.201 "r_mbytes_per_sec": 0, 00:10:08.201 "w_mbytes_per_sec": 0 00:10:08.201 }, 00:10:08.201 "claimed": false, 00:10:08.201 "zoned": false, 00:10:08.201 "supported_io_types": { 00:10:08.201 "read": true, 00:10:08.201 "write": true, 00:10:08.201 "unmap": true, 00:10:08.201 "flush": true, 00:10:08.201 "reset": true, 00:10:08.201 "nvme_admin": false, 00:10:08.201 "nvme_io": false, 00:10:08.201 "nvme_io_md": false, 00:10:08.201 "write_zeroes": true, 00:10:08.201 "zcopy": false, 00:10:08.201 "get_zone_info": false, 00:10:08.201 "zone_management": false, 00:10:08.201 "zone_append": false, 00:10:08.201 "compare": false, 00:10:08.201 "compare_and_write": false, 00:10:08.201 "abort": false, 00:10:08.201 "seek_hole": false, 00:10:08.201 "seek_data": false, 00:10:08.201 "copy": false, 00:10:08.201 
"nvme_iov_md": false 00:10:08.201 }, 00:10:08.201 "memory_domains": [ 00:10:08.201 { 00:10:08.201 "dma_device_id": "system", 00:10:08.201 "dma_device_type": 1 00:10:08.201 }, 00:10:08.201 { 00:10:08.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.201 "dma_device_type": 2 00:10:08.201 }, 00:10:08.201 { 00:10:08.202 "dma_device_id": "system", 00:10:08.202 "dma_device_type": 1 00:10:08.202 }, 00:10:08.202 { 00:10:08.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.202 "dma_device_type": 2 00:10:08.202 }, 00:10:08.202 { 00:10:08.202 "dma_device_id": "system", 00:10:08.202 "dma_device_type": 1 00:10:08.202 }, 00:10:08.202 { 00:10:08.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.202 "dma_device_type": 2 00:10:08.202 }, 00:10:08.202 { 00:10:08.202 "dma_device_id": "system", 00:10:08.202 "dma_device_type": 1 00:10:08.202 }, 00:10:08.202 { 00:10:08.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.202 "dma_device_type": 2 00:10:08.202 } 00:10:08.202 ], 00:10:08.202 "driver_specific": { 00:10:08.202 "raid": { 00:10:08.202 "uuid": "c98a1d3a-77c1-4f5b-a1b6-5d7864d375ae", 00:10:08.202 "strip_size_kb": 64, 00:10:08.202 "state": "online", 00:10:08.202 "raid_level": "concat", 00:10:08.202 "superblock": true, 00:10:08.202 "num_base_bdevs": 4, 00:10:08.202 "num_base_bdevs_discovered": 4, 00:10:08.202 "num_base_bdevs_operational": 4, 00:10:08.202 "base_bdevs_list": [ 00:10:08.202 { 00:10:08.202 "name": "BaseBdev1", 00:10:08.202 "uuid": "9af855fb-3e44-4ad8-9465-bcc6c4148281", 00:10:08.202 "is_configured": true, 00:10:08.202 "data_offset": 2048, 00:10:08.202 "data_size": 63488 00:10:08.202 }, 00:10:08.202 { 00:10:08.202 "name": "BaseBdev2", 00:10:08.202 "uuid": "9d030df7-f11f-44a1-8290-c92ac8d7d5eb", 00:10:08.202 "is_configured": true, 00:10:08.202 "data_offset": 2048, 00:10:08.202 "data_size": 63488 00:10:08.202 }, 00:10:08.202 { 00:10:08.202 "name": "BaseBdev3", 00:10:08.202 "uuid": "1e59db3c-082f-4002-891b-ebbbf32be1ef", 00:10:08.202 "is_configured": true, 
00:10:08.202 "data_offset": 2048, 00:10:08.202 "data_size": 63488 00:10:08.202 }, 00:10:08.202 { 00:10:08.202 "name": "BaseBdev4", 00:10:08.202 "uuid": "708246b1-b0b7-4d74-b805-a2eb97ed1d6d", 00:10:08.202 "is_configured": true, 00:10:08.202 "data_offset": 2048, 00:10:08.202 "data_size": 63488 00:10:08.202 } 00:10:08.202 ] 00:10:08.202 } 00:10:08.202 } 00:10:08.202 }' 00:10:08.202 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.461 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:08.461 BaseBdev2 00:10:08.461 BaseBdev3 00:10:08.461 BaseBdev4' 00:10:08.461 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.461 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.461 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.461 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:08.461 04:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.461 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.461 04:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.461 04:56:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.461 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 [2024-11-21 04:56:25.186630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.461 [2024-11-21 04:56:25.186661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.461 [2024-11-21 04:56:25.186736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
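Just above, `has_redundancy concat` returned 1, so after `bdev_malloc_delete BaseBdev1` the test expects the whole array to go offline rather than run degraded. A hedged sketch of that decision; which levels count as redundant is an assumption here, since this log only confirms that concat is not one of them:

```shell
#!/usr/bin/env bash
# Sketch of the has_redundancy/expected_state logic from bdev_raid.sh@198-200
# and @260-262. The redundant levels listed are assumed for illustration;
# the log only shows the concat branch returning 1.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;  # assumed: levels that survive a lost base bdev
        *) return 1 ;;               # raid0/concat: losing a base bdev is fatal
    esac
}

if has_redundancy concat; then
    expected_state=online
else
    expected_state=offline
fi
echo "$expected_state"  # prints "offline"
```

That `offline` expectation is exactly what the subsequent `verify_raid_bdev_state Existed_Raid offline concat 64 3` call checks against the RPC dump.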
00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.721 "name": "Existed_Raid", 00:10:08.721 "uuid": "c98a1d3a-77c1-4f5b-a1b6-5d7864d375ae", 00:10:08.721 "strip_size_kb": 64, 00:10:08.721 "state": "offline", 00:10:08.721 "raid_level": "concat", 00:10:08.721 "superblock": true, 00:10:08.721 "num_base_bdevs": 4, 00:10:08.721 "num_base_bdevs_discovered": 3, 00:10:08.721 "num_base_bdevs_operational": 3, 00:10:08.721 "base_bdevs_list": [ 00:10:08.721 { 00:10:08.721 "name": null, 00:10:08.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.721 "is_configured": false, 00:10:08.721 "data_offset": 0, 00:10:08.721 "data_size": 63488 00:10:08.721 }, 00:10:08.721 { 00:10:08.721 "name": "BaseBdev2", 00:10:08.721 "uuid": "9d030df7-f11f-44a1-8290-c92ac8d7d5eb", 00:10:08.721 "is_configured": true, 00:10:08.721 "data_offset": 2048, 00:10:08.721 "data_size": 63488 00:10:08.721 }, 00:10:08.721 { 00:10:08.721 "name": "BaseBdev3", 00:10:08.721 "uuid": "1e59db3c-082f-4002-891b-ebbbf32be1ef", 00:10:08.721 "is_configured": true, 00:10:08.721 "data_offset": 2048, 00:10:08.721 "data_size": 63488 00:10:08.721 }, 00:10:08.721 { 00:10:08.721 "name": "BaseBdev4", 00:10:08.721 "uuid": "708246b1-b0b7-4d74-b805-a2eb97ed1d6d", 00:10:08.721 "is_configured": true, 00:10:08.721 "data_offset": 2048, 00:10:08.721 "data_size": 63488 00:10:08.721 } 00:10:08.721 ] 00:10:08.721 }' 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.721 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.981 04:56:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.981 [2024-11-21 04:56:25.688982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.981 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.241 [2024-11-21 04:56:25.760031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:09.241 04:56:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.241 [2024-11-21 04:56:25.830848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:09.241 [2024-11-21 04:56:25.830895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.241 BaseBdev2 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:09.241 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.242 [ 00:10:09.242 { 00:10:09.242 "name": "BaseBdev2", 00:10:09.242 "aliases": [ 00:10:09.242 
"8953ed01-920e-467c-b0de-52de3dd81bf4" 00:10:09.242 ], 00:10:09.242 "product_name": "Malloc disk", 00:10:09.242 "block_size": 512, 00:10:09.242 "num_blocks": 65536, 00:10:09.242 "uuid": "8953ed01-920e-467c-b0de-52de3dd81bf4", 00:10:09.242 "assigned_rate_limits": { 00:10:09.242 "rw_ios_per_sec": 0, 00:10:09.242 "rw_mbytes_per_sec": 0, 00:10:09.242 "r_mbytes_per_sec": 0, 00:10:09.242 "w_mbytes_per_sec": 0 00:10:09.242 }, 00:10:09.242 "claimed": false, 00:10:09.242 "zoned": false, 00:10:09.242 "supported_io_types": { 00:10:09.242 "read": true, 00:10:09.242 "write": true, 00:10:09.242 "unmap": true, 00:10:09.242 "flush": true, 00:10:09.242 "reset": true, 00:10:09.242 "nvme_admin": false, 00:10:09.242 "nvme_io": false, 00:10:09.242 "nvme_io_md": false, 00:10:09.242 "write_zeroes": true, 00:10:09.242 "zcopy": true, 00:10:09.242 "get_zone_info": false, 00:10:09.242 "zone_management": false, 00:10:09.242 "zone_append": false, 00:10:09.242 "compare": false, 00:10:09.242 "compare_and_write": false, 00:10:09.242 "abort": true, 00:10:09.242 "seek_hole": false, 00:10:09.242 "seek_data": false, 00:10:09.242 "copy": true, 00:10:09.242 "nvme_iov_md": false 00:10:09.242 }, 00:10:09.242 "memory_domains": [ 00:10:09.242 { 00:10:09.242 "dma_device_id": "system", 00:10:09.242 "dma_device_type": 1 00:10:09.242 }, 00:10:09.242 { 00:10:09.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.242 "dma_device_type": 2 00:10:09.242 } 00:10:09.242 ], 00:10:09.242 "driver_specific": {} 00:10:09.242 } 00:10:09.242 ] 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.242 04:56:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.242 BaseBdev3 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.242 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.502 [ 00:10:09.502 { 
00:10:09.502 "name": "BaseBdev3", 00:10:09.502 "aliases": [ 00:10:09.502 "cd767c34-6d96-42f7-90af-37cc30e7f858" 00:10:09.502 ], 00:10:09.502 "product_name": "Malloc disk", 00:10:09.502 "block_size": 512, 00:10:09.502 "num_blocks": 65536, 00:10:09.502 "uuid": "cd767c34-6d96-42f7-90af-37cc30e7f858", 00:10:09.502 "assigned_rate_limits": { 00:10:09.502 "rw_ios_per_sec": 0, 00:10:09.502 "rw_mbytes_per_sec": 0, 00:10:09.502 "r_mbytes_per_sec": 0, 00:10:09.502 "w_mbytes_per_sec": 0 00:10:09.502 }, 00:10:09.502 "claimed": false, 00:10:09.502 "zoned": false, 00:10:09.502 "supported_io_types": { 00:10:09.502 "read": true, 00:10:09.502 "write": true, 00:10:09.502 "unmap": true, 00:10:09.502 "flush": true, 00:10:09.502 "reset": true, 00:10:09.502 "nvme_admin": false, 00:10:09.502 "nvme_io": false, 00:10:09.502 "nvme_io_md": false, 00:10:09.502 "write_zeroes": true, 00:10:09.502 "zcopy": true, 00:10:09.502 "get_zone_info": false, 00:10:09.502 "zone_management": false, 00:10:09.502 "zone_append": false, 00:10:09.502 "compare": false, 00:10:09.502 "compare_and_write": false, 00:10:09.502 "abort": true, 00:10:09.502 "seek_hole": false, 00:10:09.502 "seek_data": false, 00:10:09.502 "copy": true, 00:10:09.502 "nvme_iov_md": false 00:10:09.502 }, 00:10:09.502 "memory_domains": [ 00:10:09.502 { 00:10:09.502 "dma_device_id": "system", 00:10:09.502 "dma_device_type": 1 00:10:09.502 }, 00:10:09.502 { 00:10:09.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.502 "dma_device_type": 2 00:10:09.502 } 00:10:09.502 ], 00:10:09.502 "driver_specific": {} 00:10:09.502 } 00:10:09.502 ] 00:10:09.502 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.502 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.502 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.502 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:09.502 04:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:09.502 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.502 04:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.502 BaseBdev4 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.502 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:09.502 [ 00:10:09.502 { 00:10:09.502 "name": "BaseBdev4", 00:10:09.502 "aliases": [ 00:10:09.502 "9cff7014-9c72-47c0-9f37-6b89acb57321" 00:10:09.502 ], 00:10:09.502 "product_name": "Malloc disk", 00:10:09.502 "block_size": 512, 00:10:09.502 "num_blocks": 65536, 00:10:09.502 "uuid": "9cff7014-9c72-47c0-9f37-6b89acb57321", 00:10:09.502 "assigned_rate_limits": { 00:10:09.502 "rw_ios_per_sec": 0, 00:10:09.502 "rw_mbytes_per_sec": 0, 00:10:09.502 "r_mbytes_per_sec": 0, 00:10:09.503 "w_mbytes_per_sec": 0 00:10:09.503 }, 00:10:09.503 "claimed": false, 00:10:09.503 "zoned": false, 00:10:09.503 "supported_io_types": { 00:10:09.503 "read": true, 00:10:09.503 "write": true, 00:10:09.503 "unmap": true, 00:10:09.503 "flush": true, 00:10:09.503 "reset": true, 00:10:09.503 "nvme_admin": false, 00:10:09.503 "nvme_io": false, 00:10:09.503 "nvme_io_md": false, 00:10:09.503 "write_zeroes": true, 00:10:09.503 "zcopy": true, 00:10:09.503 "get_zone_info": false, 00:10:09.503 "zone_management": false, 00:10:09.503 "zone_append": false, 00:10:09.503 "compare": false, 00:10:09.503 "compare_and_write": false, 00:10:09.503 "abort": true, 00:10:09.503 "seek_hole": false, 00:10:09.503 "seek_data": false, 00:10:09.503 "copy": true, 00:10:09.503 "nvme_iov_md": false 00:10:09.503 }, 00:10:09.503 "memory_domains": [ 00:10:09.503 { 00:10:09.503 "dma_device_id": "system", 00:10:09.503 "dma_device_type": 1 00:10:09.503 }, 00:10:09.503 { 00:10:09.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.503 "dma_device_type": 2 00:10:09.503 } 00:10:09.503 ], 00:10:09.503 "driver_specific": {} 00:10:09.503 } 00:10:09.503 ] 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.503 04:56:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.503 [2024-11-21 04:56:26.046810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.503 [2024-11-21 04:56:26.046901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.503 [2024-11-21 04:56:26.046940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.503 [2024-11-21 04:56:26.048764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.503 [2024-11-21 04:56:26.048848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.503 "name": "Existed_Raid", 00:10:09.503 "uuid": "12b250ed-f128-4c8e-b902-f30c34cf8a5f", 00:10:09.503 "strip_size_kb": 64, 00:10:09.503 "state": "configuring", 00:10:09.503 "raid_level": "concat", 00:10:09.503 "superblock": true, 00:10:09.503 "num_base_bdevs": 4, 00:10:09.503 "num_base_bdevs_discovered": 3, 00:10:09.503 "num_base_bdevs_operational": 4, 00:10:09.503 "base_bdevs_list": [ 00:10:09.503 { 00:10:09.503 "name": "BaseBdev1", 00:10:09.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.503 "is_configured": false, 00:10:09.503 "data_offset": 0, 00:10:09.503 "data_size": 0 00:10:09.503 }, 00:10:09.503 { 00:10:09.503 "name": "BaseBdev2", 00:10:09.503 "uuid": "8953ed01-920e-467c-b0de-52de3dd81bf4", 00:10:09.503 "is_configured": true, 00:10:09.503 "data_offset": 2048, 00:10:09.503 "data_size": 63488 
00:10:09.503 }, 00:10:09.503 { 00:10:09.503 "name": "BaseBdev3", 00:10:09.503 "uuid": "cd767c34-6d96-42f7-90af-37cc30e7f858", 00:10:09.503 "is_configured": true, 00:10:09.503 "data_offset": 2048, 00:10:09.503 "data_size": 63488 00:10:09.503 }, 00:10:09.503 { 00:10:09.503 "name": "BaseBdev4", 00:10:09.503 "uuid": "9cff7014-9c72-47c0-9f37-6b89acb57321", 00:10:09.503 "is_configured": true, 00:10:09.503 "data_offset": 2048, 00:10:09.503 "data_size": 63488 00:10:09.503 } 00:10:09.503 ] 00:10:09.503 }' 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.503 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.762 [2024-11-21 04:56:26.454137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.762 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.763 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.763 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.763 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.763 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.763 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.763 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.022 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.022 "name": "Existed_Raid", 00:10:10.022 "uuid": "12b250ed-f128-4c8e-b902-f30c34cf8a5f", 00:10:10.022 "strip_size_kb": 64, 00:10:10.022 "state": "configuring", 00:10:10.022 "raid_level": "concat", 00:10:10.022 "superblock": true, 00:10:10.022 "num_base_bdevs": 4, 00:10:10.022 "num_base_bdevs_discovered": 2, 00:10:10.022 "num_base_bdevs_operational": 4, 00:10:10.022 "base_bdevs_list": [ 00:10:10.022 { 00:10:10.022 "name": "BaseBdev1", 00:10:10.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.022 "is_configured": false, 00:10:10.022 "data_offset": 0, 00:10:10.022 "data_size": 0 00:10:10.022 }, 00:10:10.022 { 00:10:10.022 "name": null, 00:10:10.022 "uuid": "8953ed01-920e-467c-b0de-52de3dd81bf4", 00:10:10.022 "is_configured": false, 00:10:10.022 "data_offset": 0, 00:10:10.022 "data_size": 63488 
00:10:10.022 }, 00:10:10.022 { 00:10:10.022 "name": "BaseBdev3", 00:10:10.022 "uuid": "cd767c34-6d96-42f7-90af-37cc30e7f858", 00:10:10.022 "is_configured": true, 00:10:10.022 "data_offset": 2048, 00:10:10.022 "data_size": 63488 00:10:10.022 }, 00:10:10.022 { 00:10:10.022 "name": "BaseBdev4", 00:10:10.022 "uuid": "9cff7014-9c72-47c0-9f37-6b89acb57321", 00:10:10.022 "is_configured": true, 00:10:10.022 "data_offset": 2048, 00:10:10.022 "data_size": 63488 00:10:10.022 } 00:10:10.022 ] 00:10:10.022 }' 00:10:10.022 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.022 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.282 [2024-11-21 04:56:26.948108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.282 BaseBdev1 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.282 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.283 [ 00:10:10.283 { 00:10:10.283 "name": "BaseBdev1", 00:10:10.283 "aliases": [ 00:10:10.283 "532c78a5-e6b4-4998-b115-2445c9707fca" 00:10:10.283 ], 00:10:10.283 "product_name": "Malloc disk", 00:10:10.283 "block_size": 512, 00:10:10.283 "num_blocks": 65536, 00:10:10.283 "uuid": "532c78a5-e6b4-4998-b115-2445c9707fca", 00:10:10.283 "assigned_rate_limits": { 00:10:10.283 "rw_ios_per_sec": 0, 00:10:10.283 "rw_mbytes_per_sec": 0, 
00:10:10.283 "r_mbytes_per_sec": 0, 00:10:10.283 "w_mbytes_per_sec": 0 00:10:10.283 }, 00:10:10.283 "claimed": true, 00:10:10.283 "claim_type": "exclusive_write", 00:10:10.283 "zoned": false, 00:10:10.283 "supported_io_types": { 00:10:10.283 "read": true, 00:10:10.283 "write": true, 00:10:10.283 "unmap": true, 00:10:10.283 "flush": true, 00:10:10.283 "reset": true, 00:10:10.283 "nvme_admin": false, 00:10:10.283 "nvme_io": false, 00:10:10.283 "nvme_io_md": false, 00:10:10.283 "write_zeroes": true, 00:10:10.283 "zcopy": true, 00:10:10.283 "get_zone_info": false, 00:10:10.283 "zone_management": false, 00:10:10.283 "zone_append": false, 00:10:10.283 "compare": false, 00:10:10.283 "compare_and_write": false, 00:10:10.283 "abort": true, 00:10:10.283 "seek_hole": false, 00:10:10.283 "seek_data": false, 00:10:10.283 "copy": true, 00:10:10.283 "nvme_iov_md": false 00:10:10.283 }, 00:10:10.283 "memory_domains": [ 00:10:10.283 { 00:10:10.283 "dma_device_id": "system", 00:10:10.283 "dma_device_type": 1 00:10:10.283 }, 00:10:10.283 { 00:10:10.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.283 "dma_device_type": 2 00:10:10.283 } 00:10:10.283 ], 00:10:10.283 "driver_specific": {} 00:10:10.283 } 00:10:10.283 ] 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.283 04:56:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.283 04:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.283 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.542 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.542 "name": "Existed_Raid", 00:10:10.542 "uuid": "12b250ed-f128-4c8e-b902-f30c34cf8a5f", 00:10:10.542 "strip_size_kb": 64, 00:10:10.542 "state": "configuring", 00:10:10.542 "raid_level": "concat", 00:10:10.542 "superblock": true, 00:10:10.542 "num_base_bdevs": 4, 00:10:10.542 "num_base_bdevs_discovered": 3, 00:10:10.542 "num_base_bdevs_operational": 4, 00:10:10.542 "base_bdevs_list": [ 00:10:10.542 { 00:10:10.542 "name": "BaseBdev1", 00:10:10.542 "uuid": "532c78a5-e6b4-4998-b115-2445c9707fca", 00:10:10.542 "is_configured": true, 00:10:10.542 "data_offset": 2048, 00:10:10.542 "data_size": 63488 00:10:10.542 }, 00:10:10.542 { 
00:10:10.542 "name": null, 00:10:10.542 "uuid": "8953ed01-920e-467c-b0de-52de3dd81bf4", 00:10:10.542 "is_configured": false, 00:10:10.542 "data_offset": 0, 00:10:10.542 "data_size": 63488 00:10:10.542 }, 00:10:10.542 { 00:10:10.542 "name": "BaseBdev3", 00:10:10.542 "uuid": "cd767c34-6d96-42f7-90af-37cc30e7f858", 00:10:10.542 "is_configured": true, 00:10:10.542 "data_offset": 2048, 00:10:10.542 "data_size": 63488 00:10:10.542 }, 00:10:10.542 { 00:10:10.542 "name": "BaseBdev4", 00:10:10.542 "uuid": "9cff7014-9c72-47c0-9f37-6b89acb57321", 00:10:10.542 "is_configured": true, 00:10:10.542 "data_offset": 2048, 00:10:10.542 "data_size": 63488 00:10:10.542 } 00:10:10.542 ] 00:10:10.542 }' 00:10:10.542 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.542 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.802 [2024-11-21 04:56:27.455294] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.802 04:56:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.802 "name": "Existed_Raid", 00:10:10.802 "uuid": "12b250ed-f128-4c8e-b902-f30c34cf8a5f", 00:10:10.802 "strip_size_kb": 64, 00:10:10.802 "state": "configuring", 00:10:10.802 "raid_level": "concat", 00:10:10.802 "superblock": true, 00:10:10.802 "num_base_bdevs": 4, 00:10:10.802 "num_base_bdevs_discovered": 2, 00:10:10.802 "num_base_bdevs_operational": 4, 00:10:10.802 "base_bdevs_list": [ 00:10:10.802 { 00:10:10.802 "name": "BaseBdev1", 00:10:10.802 "uuid": "532c78a5-e6b4-4998-b115-2445c9707fca", 00:10:10.802 "is_configured": true, 00:10:10.802 "data_offset": 2048, 00:10:10.802 "data_size": 63488 00:10:10.802 }, 00:10:10.802 { 00:10:10.802 "name": null, 00:10:10.802 "uuid": "8953ed01-920e-467c-b0de-52de3dd81bf4", 00:10:10.802 "is_configured": false, 00:10:10.802 "data_offset": 0, 00:10:10.802 "data_size": 63488 00:10:10.802 }, 00:10:10.802 { 00:10:10.802 "name": null, 00:10:10.802 "uuid": "cd767c34-6d96-42f7-90af-37cc30e7f858", 00:10:10.802 "is_configured": false, 00:10:10.802 "data_offset": 0, 00:10:10.802 "data_size": 63488 00:10:10.802 }, 00:10:10.802 { 00:10:10.802 "name": "BaseBdev4", 00:10:10.802 "uuid": "9cff7014-9c72-47c0-9f37-6b89acb57321", 00:10:10.802 "is_configured": true, 00:10:10.802 "data_offset": 2048, 00:10:10.802 "data_size": 63488 00:10:10.802 } 00:10:10.802 ] 00:10:10.802 }' 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.802 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.371 
04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.371 [2024-11-21 04:56:27.990409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:11.371 04:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.371 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.371 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.371 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.371 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.371 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.371 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.371 "name": "Existed_Raid", 00:10:11.371 "uuid": "12b250ed-f128-4c8e-b902-f30c34cf8a5f", 00:10:11.371 "strip_size_kb": 64, 00:10:11.371 "state": "configuring", 00:10:11.371 "raid_level": "concat", 00:10:11.371 "superblock": true, 00:10:11.371 "num_base_bdevs": 4, 00:10:11.371 "num_base_bdevs_discovered": 3, 00:10:11.371 "num_base_bdevs_operational": 4, 00:10:11.371 "base_bdevs_list": [ 00:10:11.371 { 00:10:11.371 "name": "BaseBdev1", 00:10:11.371 "uuid": "532c78a5-e6b4-4998-b115-2445c9707fca", 00:10:11.371 "is_configured": true, 00:10:11.371 "data_offset": 2048, 00:10:11.371 "data_size": 63488 00:10:11.371 }, 00:10:11.371 { 00:10:11.371 "name": null, 00:10:11.371 "uuid": "8953ed01-920e-467c-b0de-52de3dd81bf4", 00:10:11.371 "is_configured": false, 00:10:11.371 "data_offset": 0, 00:10:11.371 "data_size": 63488 00:10:11.371 }, 00:10:11.371 { 00:10:11.371 "name": "BaseBdev3", 00:10:11.371 "uuid": "cd767c34-6d96-42f7-90af-37cc30e7f858", 00:10:11.371 "is_configured": true, 00:10:11.371 "data_offset": 2048, 00:10:11.371 "data_size": 63488 00:10:11.371 }, 00:10:11.371 { 00:10:11.371 "name": "BaseBdev4", 00:10:11.371 "uuid": 
"9cff7014-9c72-47c0-9f37-6b89acb57321", 00:10:11.371 "is_configured": true, 00:10:11.371 "data_offset": 2048, 00:10:11.371 "data_size": 63488 00:10:11.371 } 00:10:11.371 ] 00:10:11.371 }' 00:10:11.371 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.371 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.942 [2024-11-21 04:56:28.501555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.942 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.943 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.943 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.943 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.943 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.943 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.943 "name": "Existed_Raid", 00:10:11.943 "uuid": "12b250ed-f128-4c8e-b902-f30c34cf8a5f", 00:10:11.943 "strip_size_kb": 64, 00:10:11.943 "state": "configuring", 00:10:11.943 "raid_level": "concat", 00:10:11.943 "superblock": true, 00:10:11.943 "num_base_bdevs": 4, 00:10:11.943 "num_base_bdevs_discovered": 2, 00:10:11.943 "num_base_bdevs_operational": 4, 00:10:11.943 "base_bdevs_list": [ 00:10:11.943 { 00:10:11.943 "name": null, 00:10:11.943 
"uuid": "532c78a5-e6b4-4998-b115-2445c9707fca", 00:10:11.943 "is_configured": false, 00:10:11.943 "data_offset": 0, 00:10:11.943 "data_size": 63488 00:10:11.943 }, 00:10:11.943 { 00:10:11.943 "name": null, 00:10:11.943 "uuid": "8953ed01-920e-467c-b0de-52de3dd81bf4", 00:10:11.943 "is_configured": false, 00:10:11.943 "data_offset": 0, 00:10:11.943 "data_size": 63488 00:10:11.943 }, 00:10:11.943 { 00:10:11.943 "name": "BaseBdev3", 00:10:11.943 "uuid": "cd767c34-6d96-42f7-90af-37cc30e7f858", 00:10:11.943 "is_configured": true, 00:10:11.943 "data_offset": 2048, 00:10:11.943 "data_size": 63488 00:10:11.943 }, 00:10:11.943 { 00:10:11.943 "name": "BaseBdev4", 00:10:11.943 "uuid": "9cff7014-9c72-47c0-9f37-6b89acb57321", 00:10:11.943 "is_configured": true, 00:10:11.943 "data_offset": 2048, 00:10:11.943 "data_size": 63488 00:10:11.943 } 00:10:11.943 ] 00:10:11.943 }' 00:10:11.943 04:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.943 04:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.512 [2024-11-21 04:56:29.059294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.512 04:56:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.512 "name": "Existed_Raid", 00:10:12.512 "uuid": "12b250ed-f128-4c8e-b902-f30c34cf8a5f", 00:10:12.512 "strip_size_kb": 64, 00:10:12.512 "state": "configuring", 00:10:12.512 "raid_level": "concat", 00:10:12.512 "superblock": true, 00:10:12.512 "num_base_bdevs": 4, 00:10:12.512 "num_base_bdevs_discovered": 3, 00:10:12.512 "num_base_bdevs_operational": 4, 00:10:12.512 "base_bdevs_list": [ 00:10:12.512 { 00:10:12.512 "name": null, 00:10:12.512 "uuid": "532c78a5-e6b4-4998-b115-2445c9707fca", 00:10:12.512 "is_configured": false, 00:10:12.512 "data_offset": 0, 00:10:12.512 "data_size": 63488 00:10:12.512 }, 00:10:12.512 { 00:10:12.512 "name": "BaseBdev2", 00:10:12.512 "uuid": "8953ed01-920e-467c-b0de-52de3dd81bf4", 00:10:12.512 "is_configured": true, 00:10:12.512 "data_offset": 2048, 00:10:12.512 "data_size": 63488 00:10:12.512 }, 00:10:12.512 { 00:10:12.512 "name": "BaseBdev3", 00:10:12.512 "uuid": "cd767c34-6d96-42f7-90af-37cc30e7f858", 00:10:12.512 "is_configured": true, 00:10:12.512 "data_offset": 2048, 00:10:12.512 "data_size": 63488 00:10:12.512 }, 00:10:12.512 { 00:10:12.512 "name": "BaseBdev4", 00:10:12.512 "uuid": "9cff7014-9c72-47c0-9f37-6b89acb57321", 00:10:12.512 "is_configured": true, 00:10:12.512 "data_offset": 2048, 00:10:12.512 "data_size": 63488 00:10:12.512 } 00:10:12.512 ] 00:10:12.512 }' 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.512 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.770 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.770 04:56:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.770 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.770 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.770 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 532c78a5-e6b4-4998-b115-2445c9707fca 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 [2024-11-21 04:56:29.581434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:13.030 [2024-11-21 04:56:29.581760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:13.030 [2024-11-21 04:56:29.581820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:13.030 NewBaseBdev 00:10:13.030 [2024-11-21 04:56:29.582151] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:13.030 [2024-11-21 04:56:29.582275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:13.030 [2024-11-21 04:56:29.582337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:13.030 [2024-11-21 04:56:29.582506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:13.030 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 
04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 [ 00:10:13.030 { 00:10:13.030 "name": "NewBaseBdev", 00:10:13.030 "aliases": [ 00:10:13.030 "532c78a5-e6b4-4998-b115-2445c9707fca" 00:10:13.030 ], 00:10:13.030 "product_name": "Malloc disk", 00:10:13.030 "block_size": 512, 00:10:13.030 "num_blocks": 65536, 00:10:13.030 "uuid": "532c78a5-e6b4-4998-b115-2445c9707fca", 00:10:13.030 "assigned_rate_limits": { 00:10:13.030 "rw_ios_per_sec": 0, 00:10:13.030 "rw_mbytes_per_sec": 0, 00:10:13.031 "r_mbytes_per_sec": 0, 00:10:13.031 "w_mbytes_per_sec": 0 00:10:13.031 }, 00:10:13.031 "claimed": true, 00:10:13.031 "claim_type": "exclusive_write", 00:10:13.031 "zoned": false, 00:10:13.031 "supported_io_types": { 00:10:13.031 "read": true, 00:10:13.031 "write": true, 00:10:13.031 "unmap": true, 00:10:13.031 "flush": true, 00:10:13.031 "reset": true, 00:10:13.031 "nvme_admin": false, 00:10:13.031 "nvme_io": false, 00:10:13.031 "nvme_io_md": false, 00:10:13.031 "write_zeroes": true, 00:10:13.031 "zcopy": true, 00:10:13.031 "get_zone_info": false, 00:10:13.031 "zone_management": false, 00:10:13.031 "zone_append": false, 00:10:13.031 "compare": false, 00:10:13.031 "compare_and_write": false, 00:10:13.031 "abort": true, 00:10:13.031 "seek_hole": false, 00:10:13.031 "seek_data": false, 00:10:13.031 "copy": true, 00:10:13.031 "nvme_iov_md": false 00:10:13.031 }, 00:10:13.031 "memory_domains": [ 00:10:13.031 { 00:10:13.031 "dma_device_id": "system", 00:10:13.031 "dma_device_type": 1 00:10:13.031 }, 00:10:13.031 { 00:10:13.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.031 "dma_device_type": 2 00:10:13.031 } 00:10:13.031 ], 00:10:13.031 "driver_specific": {} 00:10:13.031 } 00:10:13.031 ] 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.031 04:56:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.031 "name": "Existed_Raid", 00:10:13.031 "uuid": "12b250ed-f128-4c8e-b902-f30c34cf8a5f", 00:10:13.031 "strip_size_kb": 64, 00:10:13.031 
"state": "online", 00:10:13.031 "raid_level": "concat", 00:10:13.031 "superblock": true, 00:10:13.031 "num_base_bdevs": 4, 00:10:13.031 "num_base_bdevs_discovered": 4, 00:10:13.031 "num_base_bdevs_operational": 4, 00:10:13.031 "base_bdevs_list": [ 00:10:13.031 { 00:10:13.031 "name": "NewBaseBdev", 00:10:13.031 "uuid": "532c78a5-e6b4-4998-b115-2445c9707fca", 00:10:13.031 "is_configured": true, 00:10:13.031 "data_offset": 2048, 00:10:13.031 "data_size": 63488 00:10:13.031 }, 00:10:13.031 { 00:10:13.031 "name": "BaseBdev2", 00:10:13.031 "uuid": "8953ed01-920e-467c-b0de-52de3dd81bf4", 00:10:13.031 "is_configured": true, 00:10:13.031 "data_offset": 2048, 00:10:13.031 "data_size": 63488 00:10:13.031 }, 00:10:13.031 { 00:10:13.031 "name": "BaseBdev3", 00:10:13.031 "uuid": "cd767c34-6d96-42f7-90af-37cc30e7f858", 00:10:13.031 "is_configured": true, 00:10:13.031 "data_offset": 2048, 00:10:13.031 "data_size": 63488 00:10:13.031 }, 00:10:13.031 { 00:10:13.031 "name": "BaseBdev4", 00:10:13.031 "uuid": "9cff7014-9c72-47c0-9f37-6b89acb57321", 00:10:13.031 "is_configured": true, 00:10:13.031 "data_offset": 2048, 00:10:13.031 "data_size": 63488 00:10:13.031 } 00:10:13.031 ] 00:10:13.031 }' 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.031 04:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.600 
04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.600 [2024-11-21 04:56:30.065003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.600 "name": "Existed_Raid", 00:10:13.600 "aliases": [ 00:10:13.600 "12b250ed-f128-4c8e-b902-f30c34cf8a5f" 00:10:13.600 ], 00:10:13.600 "product_name": "Raid Volume", 00:10:13.600 "block_size": 512, 00:10:13.600 "num_blocks": 253952, 00:10:13.600 "uuid": "12b250ed-f128-4c8e-b902-f30c34cf8a5f", 00:10:13.600 "assigned_rate_limits": { 00:10:13.600 "rw_ios_per_sec": 0, 00:10:13.600 "rw_mbytes_per_sec": 0, 00:10:13.600 "r_mbytes_per_sec": 0, 00:10:13.600 "w_mbytes_per_sec": 0 00:10:13.600 }, 00:10:13.600 "claimed": false, 00:10:13.600 "zoned": false, 00:10:13.600 "supported_io_types": { 00:10:13.600 "read": true, 00:10:13.600 "write": true, 00:10:13.600 "unmap": true, 00:10:13.600 "flush": true, 00:10:13.600 "reset": true, 00:10:13.600 "nvme_admin": false, 00:10:13.600 "nvme_io": false, 00:10:13.600 "nvme_io_md": false, 00:10:13.600 "write_zeroes": true, 00:10:13.600 "zcopy": false, 00:10:13.600 "get_zone_info": false, 00:10:13.600 "zone_management": false, 00:10:13.600 "zone_append": false, 00:10:13.600 "compare": false, 00:10:13.600 "compare_and_write": false, 00:10:13.600 "abort": 
false, 00:10:13.600 "seek_hole": false, 00:10:13.600 "seek_data": false, 00:10:13.600 "copy": false, 00:10:13.600 "nvme_iov_md": false 00:10:13.600 }, 00:10:13.600 "memory_domains": [ 00:10:13.600 { 00:10:13.600 "dma_device_id": "system", 00:10:13.600 "dma_device_type": 1 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.600 "dma_device_type": 2 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 "dma_device_id": "system", 00:10:13.600 "dma_device_type": 1 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.600 "dma_device_type": 2 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 "dma_device_id": "system", 00:10:13.600 "dma_device_type": 1 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.600 "dma_device_type": 2 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 "dma_device_id": "system", 00:10:13.600 "dma_device_type": 1 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.600 "dma_device_type": 2 00:10:13.600 } 00:10:13.600 ], 00:10:13.600 "driver_specific": { 00:10:13.600 "raid": { 00:10:13.600 "uuid": "12b250ed-f128-4c8e-b902-f30c34cf8a5f", 00:10:13.600 "strip_size_kb": 64, 00:10:13.600 "state": "online", 00:10:13.600 "raid_level": "concat", 00:10:13.600 "superblock": true, 00:10:13.600 "num_base_bdevs": 4, 00:10:13.600 "num_base_bdevs_discovered": 4, 00:10:13.600 "num_base_bdevs_operational": 4, 00:10:13.600 "base_bdevs_list": [ 00:10:13.600 { 00:10:13.600 "name": "NewBaseBdev", 00:10:13.600 "uuid": "532c78a5-e6b4-4998-b115-2445c9707fca", 00:10:13.600 "is_configured": true, 00:10:13.600 "data_offset": 2048, 00:10:13.600 "data_size": 63488 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 "name": "BaseBdev2", 00:10:13.600 "uuid": "8953ed01-920e-467c-b0de-52de3dd81bf4", 00:10:13.600 "is_configured": true, 00:10:13.600 "data_offset": 2048, 00:10:13.600 "data_size": 63488 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 
"name": "BaseBdev3", 00:10:13.600 "uuid": "cd767c34-6d96-42f7-90af-37cc30e7f858", 00:10:13.600 "is_configured": true, 00:10:13.600 "data_offset": 2048, 00:10:13.600 "data_size": 63488 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 "name": "BaseBdev4", 00:10:13.600 "uuid": "9cff7014-9c72-47c0-9f37-6b89acb57321", 00:10:13.600 "is_configured": true, 00:10:13.600 "data_offset": 2048, 00:10:13.600 "data_size": 63488 00:10:13.600 } 00:10:13.600 ] 00:10:13.600 } 00:10:13.600 } 00:10:13.600 }' 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:13.600 BaseBdev2 00:10:13.600 BaseBdev3 00:10:13.600 BaseBdev4' 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.600 04:56:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.600 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.601 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.860 [2024-11-21 04:56:30.360142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.860 [2024-11-21 04:56:30.360171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.860 [2024-11-21 04:56:30.360240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.860 [2024-11-21 04:56:30.360305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.860 [2024-11-21 04:56:30.360322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82975 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82975 ']' 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82975 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82975 00:10:13.860 killing process with pid 82975 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.860 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.861 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82975' 00:10:13.861 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82975 00:10:13.861 [2024-11-21 04:56:30.410068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.861 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 82975 00:10:13.861 [2024-11-21 04:56:30.449861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.120 04:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:14.120 00:10:14.120 real 0m9.617s 00:10:14.120 user 0m16.492s 00:10:14.120 sys 0m2.017s 00:10:14.120 ************************************ 00:10:14.120 END TEST raid_state_function_test_sb 00:10:14.120 
************************************ 00:10:14.120 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.120 04:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.120 04:56:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:14.120 04:56:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:14.120 04:56:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.120 04:56:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.120 ************************************ 00:10:14.120 START TEST raid_superblock_test 00:10:14.120 ************************************ 00:10:14.120 04:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:14.120 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:14.120 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:14.121 04:56:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83623 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83623 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83623 ']' 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.121 04:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.121 [2024-11-21 04:56:30.824845] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:10:14.121 [2024-11-21 04:56:30.825067] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83623 ] 00:10:14.380 [2024-11-21 04:56:30.998864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.380 [2024-11-21 04:56:31.024429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.380 [2024-11-21 04:56:31.066394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.380 [2024-11-21 04:56:31.066508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:14.948 
04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.948 malloc1 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.948 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.949 [2024-11-21 04:56:31.680633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:14.949 [2024-11-21 04:56:31.680703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.949 [2024-11-21 04:56:31.680728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:14.949 [2024-11-21 04:56:31.680743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.208 [2024-11-21 04:56:31.683000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.208 [2024-11-21 04:56:31.683106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:15.208 pt1 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.208 malloc2 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.208 [2024-11-21 04:56:31.709256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:15.208 [2024-11-21 04:56:31.709351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.208 [2024-11-21 04:56:31.709385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:15.208 [2024-11-21 04:56:31.709416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.208 [2024-11-21 04:56:31.711631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.208 [2024-11-21 04:56:31.711703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:15.208 
pt2 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.208 malloc3 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.208 [2024-11-21 04:56:31.741707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:15.208 [2024-11-21 04:56:31.741793] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.208 [2024-11-21 04:56:31.741827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:15.208 [2024-11-21 04:56:31.741856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.208 [2024-11-21 04:56:31.743954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.208 [2024-11-21 04:56:31.744026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:15.208 pt3 00:10:15.208 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.209 malloc4 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.209 [2024-11-21 04:56:31.780330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:15.209 [2024-11-21 04:56:31.780424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.209 [2024-11-21 04:56:31.780458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:15.209 [2024-11-21 04:56:31.780495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.209 [2024-11-21 04:56:31.782580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.209 [2024-11-21 04:56:31.782651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:15.209 pt4 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.209 [2024-11-21 04:56:31.792389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:15.209 [2024-11-21 
04:56:31.794187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:15.209 [2024-11-21 04:56:31.794246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:15.209 [2024-11-21 04:56:31.794306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:15.209 [2024-11-21 04:56:31.794463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:15.209 [2024-11-21 04:56:31.794477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.209 [2024-11-21 04:56:31.794764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:15.209 [2024-11-21 04:56:31.794937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:15.209 [2024-11-21 04:56:31.794953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:15.209 [2024-11-21 04:56:31.795079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.209 "name": "raid_bdev1", 00:10:15.209 "uuid": "92bd65a4-2d71-452a-be2a-3ce78c37c173", 00:10:15.209 "strip_size_kb": 64, 00:10:15.209 "state": "online", 00:10:15.209 "raid_level": "concat", 00:10:15.209 "superblock": true, 00:10:15.209 "num_base_bdevs": 4, 00:10:15.209 "num_base_bdevs_discovered": 4, 00:10:15.209 "num_base_bdevs_operational": 4, 00:10:15.209 "base_bdevs_list": [ 00:10:15.209 { 00:10:15.209 "name": "pt1", 00:10:15.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.209 "is_configured": true, 00:10:15.209 "data_offset": 2048, 00:10:15.209 "data_size": 63488 00:10:15.209 }, 00:10:15.209 { 00:10:15.209 "name": "pt2", 00:10:15.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.209 "is_configured": true, 00:10:15.209 "data_offset": 2048, 00:10:15.209 "data_size": 63488 00:10:15.209 }, 00:10:15.209 { 00:10:15.209 "name": "pt3", 00:10:15.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.209 "is_configured": true, 00:10:15.209 "data_offset": 2048, 00:10:15.209 
"data_size": 63488 00:10:15.209 }, 00:10:15.209 { 00:10:15.209 "name": "pt4", 00:10:15.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.209 "is_configured": true, 00:10:15.209 "data_offset": 2048, 00:10:15.209 "data_size": 63488 00:10:15.209 } 00:10:15.209 ] 00:10:15.209 }' 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.209 04:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.468 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:15.468 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:15.468 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.468 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.468 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.468 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.729 [2024-11-21 04:56:32.212005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.729 "name": "raid_bdev1", 00:10:15.729 "aliases": [ 00:10:15.729 "92bd65a4-2d71-452a-be2a-3ce78c37c173" 
00:10:15.729 ], 00:10:15.729 "product_name": "Raid Volume", 00:10:15.729 "block_size": 512, 00:10:15.729 "num_blocks": 253952, 00:10:15.729 "uuid": "92bd65a4-2d71-452a-be2a-3ce78c37c173", 00:10:15.729 "assigned_rate_limits": { 00:10:15.729 "rw_ios_per_sec": 0, 00:10:15.729 "rw_mbytes_per_sec": 0, 00:10:15.729 "r_mbytes_per_sec": 0, 00:10:15.729 "w_mbytes_per_sec": 0 00:10:15.729 }, 00:10:15.729 "claimed": false, 00:10:15.729 "zoned": false, 00:10:15.729 "supported_io_types": { 00:10:15.729 "read": true, 00:10:15.729 "write": true, 00:10:15.729 "unmap": true, 00:10:15.729 "flush": true, 00:10:15.729 "reset": true, 00:10:15.729 "nvme_admin": false, 00:10:15.729 "nvme_io": false, 00:10:15.729 "nvme_io_md": false, 00:10:15.729 "write_zeroes": true, 00:10:15.729 "zcopy": false, 00:10:15.729 "get_zone_info": false, 00:10:15.729 "zone_management": false, 00:10:15.729 "zone_append": false, 00:10:15.729 "compare": false, 00:10:15.729 "compare_and_write": false, 00:10:15.729 "abort": false, 00:10:15.729 "seek_hole": false, 00:10:15.729 "seek_data": false, 00:10:15.729 "copy": false, 00:10:15.729 "nvme_iov_md": false 00:10:15.729 }, 00:10:15.729 "memory_domains": [ 00:10:15.729 { 00:10:15.729 "dma_device_id": "system", 00:10:15.729 "dma_device_type": 1 00:10:15.729 }, 00:10:15.729 { 00:10:15.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.729 "dma_device_type": 2 00:10:15.729 }, 00:10:15.729 { 00:10:15.729 "dma_device_id": "system", 00:10:15.729 "dma_device_type": 1 00:10:15.729 }, 00:10:15.729 { 00:10:15.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.729 "dma_device_type": 2 00:10:15.729 }, 00:10:15.729 { 00:10:15.729 "dma_device_id": "system", 00:10:15.729 "dma_device_type": 1 00:10:15.729 }, 00:10:15.729 { 00:10:15.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.729 "dma_device_type": 2 00:10:15.729 }, 00:10:15.729 { 00:10:15.729 "dma_device_id": "system", 00:10:15.729 "dma_device_type": 1 00:10:15.729 }, 00:10:15.729 { 00:10:15.729 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:15.729 "dma_device_type": 2 00:10:15.729 } 00:10:15.729 ], 00:10:15.729 "driver_specific": { 00:10:15.729 "raid": { 00:10:15.729 "uuid": "92bd65a4-2d71-452a-be2a-3ce78c37c173", 00:10:15.729 "strip_size_kb": 64, 00:10:15.729 "state": "online", 00:10:15.729 "raid_level": "concat", 00:10:15.729 "superblock": true, 00:10:15.729 "num_base_bdevs": 4, 00:10:15.729 "num_base_bdevs_discovered": 4, 00:10:15.729 "num_base_bdevs_operational": 4, 00:10:15.729 "base_bdevs_list": [ 00:10:15.729 { 00:10:15.729 "name": "pt1", 00:10:15.729 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.729 "is_configured": true, 00:10:15.729 "data_offset": 2048, 00:10:15.729 "data_size": 63488 00:10:15.729 }, 00:10:15.729 { 00:10:15.729 "name": "pt2", 00:10:15.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.729 "is_configured": true, 00:10:15.729 "data_offset": 2048, 00:10:15.729 "data_size": 63488 00:10:15.729 }, 00:10:15.729 { 00:10:15.729 "name": "pt3", 00:10:15.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.729 "is_configured": true, 00:10:15.729 "data_offset": 2048, 00:10:15.729 "data_size": 63488 00:10:15.729 }, 00:10:15.729 { 00:10:15.729 "name": "pt4", 00:10:15.729 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.729 "is_configured": true, 00:10:15.729 "data_offset": 2048, 00:10:15.729 "data_size": 63488 00:10:15.729 } 00:10:15.729 ] 00:10:15.729 } 00:10:15.729 } 00:10:15.729 }' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:15.729 pt2 00:10:15.729 pt3 00:10:15.729 pt4' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.729 04:56:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.729 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:15.989 [2024-11-21 04:56:32.507467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=92bd65a4-2d71-452a-be2a-3ce78c37c173 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 92bd65a4-2d71-452a-be2a-3ce78c37c173 ']' 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.989 [2024-11-21 04:56:32.555023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.989 [2024-11-21 04:56:32.555051] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.989 [2024-11-21 04:56:32.555139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.989 [2024-11-21 04:56:32.555221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.989 [2024-11-21 04:56:32.555238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.989 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.989 [2024-11-21 04:56:32.718819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:15.989 [2024-11-21 04:56:32.721008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:15.989 [2024-11-21 04:56:32.721066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:15.989 [2024-11-21 04:56:32.721099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:15.989 [2024-11-21 04:56:32.721168] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:15.989 [2024-11-21 04:56:32.721227] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:15.989 [2024-11-21 04:56:32.721252] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:15.989 [2024-11-21 04:56:32.721270] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:15.989 [2024-11-21 04:56:32.721288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.989 [2024-11-21 04:56:32.721298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:10:16.250 request: 00:10:16.250 { 00:10:16.250 "name": "raid_bdev1", 00:10:16.250 "raid_level": "concat", 00:10:16.250 "base_bdevs": [ 00:10:16.250 "malloc1", 00:10:16.250 "malloc2", 00:10:16.250 "malloc3", 00:10:16.250 "malloc4" 00:10:16.250 ], 00:10:16.250 "strip_size_kb": 64, 00:10:16.250 "superblock": false, 00:10:16.250 "method": "bdev_raid_create", 00:10:16.250 "req_id": 1 00:10:16.250 } 00:10:16.250 Got JSON-RPC error response 00:10:16.250 response: 00:10:16.250 { 00:10:16.250 "code": -17, 00:10:16.250 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:16.250 } 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 [2024-11-21 04:56:32.774668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.250 [2024-11-21 04:56:32.774773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.250 [2024-11-21 04:56:32.774814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:16.250 [2024-11-21 04:56:32.774843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.250 [2024-11-21 04:56:32.777287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.250 [2024-11-21 04:56:32.777375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.250 [2024-11-21 04:56:32.777490] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:16.250 [2024-11-21 04:56:32.777571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.250 pt1 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.250 "name": "raid_bdev1", 00:10:16.250 "uuid": "92bd65a4-2d71-452a-be2a-3ce78c37c173", 00:10:16.250 "strip_size_kb": 64, 00:10:16.250 "state": "configuring", 00:10:16.250 "raid_level": "concat", 00:10:16.250 "superblock": true, 00:10:16.250 "num_base_bdevs": 4, 00:10:16.250 "num_base_bdevs_discovered": 1, 00:10:16.250 "num_base_bdevs_operational": 4, 00:10:16.250 "base_bdevs_list": [ 00:10:16.250 { 00:10:16.250 "name": "pt1", 00:10:16.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.250 "is_configured": true, 00:10:16.250 "data_offset": 2048, 00:10:16.250 "data_size": 63488 00:10:16.250 }, 00:10:16.250 { 00:10:16.250 "name": null, 00:10:16.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.250 "is_configured": false, 00:10:16.250 "data_offset": 2048, 00:10:16.250 "data_size": 63488 00:10:16.250 }, 00:10:16.250 { 00:10:16.250 "name": null, 00:10:16.250 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.250 "is_configured": false, 00:10:16.250 "data_offset": 2048, 00:10:16.250 "data_size": 63488 00:10:16.250 }, 00:10:16.250 { 00:10:16.250 "name": null, 00:10:16.250 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.250 "is_configured": false, 00:10:16.250 "data_offset": 2048, 00:10:16.250 "data_size": 63488 00:10:16.250 } 00:10:16.250 ] 00:10:16.250 }' 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.250 04:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.510 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:16.510 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.510 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.510 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.770 [2024-11-21 04:56:33.245922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.770 [2024-11-21 04:56:33.245985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.770 [2024-11-21 04:56:33.246007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:16.770 [2024-11-21 04:56:33.246016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.770 [2024-11-21 04:56:33.246451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.770 [2024-11-21 04:56:33.246474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.770 [2024-11-21 04:56:33.246557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:16.770 [2024-11-21 04:56:33.246581] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.770 pt2 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.770 [2024-11-21 04:56:33.257866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.770 04:56:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.770 "name": "raid_bdev1", 00:10:16.770 "uuid": "92bd65a4-2d71-452a-be2a-3ce78c37c173", 00:10:16.770 "strip_size_kb": 64, 00:10:16.770 "state": "configuring", 00:10:16.770 "raid_level": "concat", 00:10:16.770 "superblock": true, 00:10:16.770 "num_base_bdevs": 4, 00:10:16.770 "num_base_bdevs_discovered": 1, 00:10:16.770 "num_base_bdevs_operational": 4, 00:10:16.770 "base_bdevs_list": [ 00:10:16.770 { 00:10:16.770 "name": "pt1", 00:10:16.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.770 "is_configured": true, 00:10:16.770 "data_offset": 2048, 00:10:16.770 "data_size": 63488 00:10:16.770 }, 00:10:16.770 { 00:10:16.770 "name": null, 00:10:16.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.770 "is_configured": false, 00:10:16.770 "data_offset": 0, 00:10:16.770 "data_size": 63488 00:10:16.770 }, 00:10:16.770 { 00:10:16.770 "name": null, 00:10:16.770 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.770 "is_configured": false, 00:10:16.770 "data_offset": 2048, 00:10:16.770 "data_size": 63488 00:10:16.770 }, 00:10:16.770 { 00:10:16.770 "name": null, 00:10:16.770 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.770 "is_configured": false, 00:10:16.770 "data_offset": 2048, 00:10:16.770 "data_size": 63488 00:10:16.770 } 00:10:16.770 ] 00:10:16.770 }' 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.770 04:56:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.031 [2024-11-21 04:56:33.669201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.031 [2024-11-21 04:56:33.669320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.031 [2024-11-21 04:56:33.669354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:17.031 [2024-11-21 04:56:33.669384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.031 [2024-11-21 04:56:33.669833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.031 [2024-11-21 04:56:33.669894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.031 [2024-11-21 04:56:33.670021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:17.031 [2024-11-21 04:56:33.670077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.031 pt2 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.031 [2024-11-21 04:56:33.681158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.031 [2024-11-21 04:56:33.681236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.031 [2024-11-21 04:56:33.681266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:17.031 [2024-11-21 04:56:33.681294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.031 [2024-11-21 04:56:33.681659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.031 [2024-11-21 04:56:33.681715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.031 [2024-11-21 04:56:33.681811] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:17.031 [2024-11-21 04:56:33.681860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.031 pt3 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.031 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.031 [2024-11-21 04:56:33.693113] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:17.031 [2024-11-21 04:56:33.693210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.031 [2024-11-21 04:56:33.693242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:17.031 [2024-11-21 04:56:33.693270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.031 [2024-11-21 04:56:33.693608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.031 [2024-11-21 04:56:33.693663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:17.031 [2024-11-21 04:56:33.693756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:17.031 [2024-11-21 04:56:33.693803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:17.031 [2024-11-21 04:56:33.693934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:17.031 [2024-11-21 04:56:33.693977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:17.031 [2024-11-21 04:56:33.694239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:17.031 [2024-11-21 04:56:33.694363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:17.031 [2024-11-21 04:56:33.694372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:17.031 [2024-11-21 04:56:33.694466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.032 pt4 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.032 "name": "raid_bdev1", 00:10:17.032 "uuid": "92bd65a4-2d71-452a-be2a-3ce78c37c173", 00:10:17.032 "strip_size_kb": 64, 00:10:17.032 "state": "online", 00:10:17.032 "raid_level": "concat", 00:10:17.032 
"superblock": true, 00:10:17.032 "num_base_bdevs": 4, 00:10:17.032 "num_base_bdevs_discovered": 4, 00:10:17.032 "num_base_bdevs_operational": 4, 00:10:17.032 "base_bdevs_list": [ 00:10:17.032 { 00:10:17.032 "name": "pt1", 00:10:17.032 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.032 "is_configured": true, 00:10:17.032 "data_offset": 2048, 00:10:17.032 "data_size": 63488 00:10:17.032 }, 00:10:17.032 { 00:10:17.032 "name": "pt2", 00:10:17.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.032 "is_configured": true, 00:10:17.032 "data_offset": 2048, 00:10:17.032 "data_size": 63488 00:10:17.032 }, 00:10:17.032 { 00:10:17.032 "name": "pt3", 00:10:17.032 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.032 "is_configured": true, 00:10:17.032 "data_offset": 2048, 00:10:17.032 "data_size": 63488 00:10:17.032 }, 00:10:17.032 { 00:10:17.032 "name": "pt4", 00:10:17.032 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.032 "is_configured": true, 00:10:17.032 "data_offset": 2048, 00:10:17.032 "data_size": 63488 00:10:17.032 } 00:10:17.032 ] 00:10:17.032 }' 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.032 04:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.602 04:56:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.602 [2024-11-21 04:56:34.144712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.602 "name": "raid_bdev1", 00:10:17.602 "aliases": [ 00:10:17.602 "92bd65a4-2d71-452a-be2a-3ce78c37c173" 00:10:17.602 ], 00:10:17.602 "product_name": "Raid Volume", 00:10:17.602 "block_size": 512, 00:10:17.602 "num_blocks": 253952, 00:10:17.602 "uuid": "92bd65a4-2d71-452a-be2a-3ce78c37c173", 00:10:17.602 "assigned_rate_limits": { 00:10:17.602 "rw_ios_per_sec": 0, 00:10:17.602 "rw_mbytes_per_sec": 0, 00:10:17.602 "r_mbytes_per_sec": 0, 00:10:17.602 "w_mbytes_per_sec": 0 00:10:17.602 }, 00:10:17.602 "claimed": false, 00:10:17.602 "zoned": false, 00:10:17.602 "supported_io_types": { 00:10:17.602 "read": true, 00:10:17.602 "write": true, 00:10:17.602 "unmap": true, 00:10:17.602 "flush": true, 00:10:17.602 "reset": true, 00:10:17.602 "nvme_admin": false, 00:10:17.602 "nvme_io": false, 00:10:17.602 "nvme_io_md": false, 00:10:17.602 "write_zeroes": true, 00:10:17.602 "zcopy": false, 00:10:17.602 "get_zone_info": false, 00:10:17.602 "zone_management": false, 00:10:17.602 "zone_append": false, 00:10:17.602 "compare": false, 00:10:17.602 "compare_and_write": false, 00:10:17.602 "abort": false, 00:10:17.602 "seek_hole": false, 00:10:17.602 "seek_data": false, 00:10:17.602 "copy": false, 00:10:17.602 "nvme_iov_md": false 00:10:17.602 }, 00:10:17.602 
"memory_domains": [ 00:10:17.602 { 00:10:17.602 "dma_device_id": "system", 00:10:17.602 "dma_device_type": 1 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.602 "dma_device_type": 2 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "dma_device_id": "system", 00:10:17.602 "dma_device_type": 1 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.602 "dma_device_type": 2 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "dma_device_id": "system", 00:10:17.602 "dma_device_type": 1 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.602 "dma_device_type": 2 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "dma_device_id": "system", 00:10:17.602 "dma_device_type": 1 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.602 "dma_device_type": 2 00:10:17.602 } 00:10:17.602 ], 00:10:17.602 "driver_specific": { 00:10:17.602 "raid": { 00:10:17.602 "uuid": "92bd65a4-2d71-452a-be2a-3ce78c37c173", 00:10:17.602 "strip_size_kb": 64, 00:10:17.602 "state": "online", 00:10:17.602 "raid_level": "concat", 00:10:17.602 "superblock": true, 00:10:17.602 "num_base_bdevs": 4, 00:10:17.602 "num_base_bdevs_discovered": 4, 00:10:17.602 "num_base_bdevs_operational": 4, 00:10:17.602 "base_bdevs_list": [ 00:10:17.602 { 00:10:17.602 "name": "pt1", 00:10:17.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.602 "is_configured": true, 00:10:17.602 "data_offset": 2048, 00:10:17.602 "data_size": 63488 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "name": "pt2", 00:10:17.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.602 "is_configured": true, 00:10:17.602 "data_offset": 2048, 00:10:17.602 "data_size": 63488 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "name": "pt3", 00:10:17.602 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.602 "is_configured": true, 00:10:17.602 "data_offset": 2048, 00:10:17.602 "data_size": 63488 
00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "name": "pt4", 00:10:17.602 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.602 "is_configured": true, 00:10:17.602 "data_offset": 2048, 00:10:17.602 "data_size": 63488 00:10:17.602 } 00:10:17.602 ] 00:10:17.602 } 00:10:17.602 } 00:10:17.602 }' 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:17.602 pt2 00:10:17.602 pt3 00:10:17.602 pt4' 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.602 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.883 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.883 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.883 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.884 [2024-11-21 04:56:34.472167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 92bd65a4-2d71-452a-be2a-3ce78c37c173 '!=' 92bd65a4-2d71-452a-be2a-3ce78c37c173 ']' 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83623 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83623 ']' 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83623 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83623 00:10:17.884 killing process with pid 83623 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83623' 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83623 00:10:17.884 [2024-11-21 04:56:34.550397] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.884 [2024-11-21 04:56:34.550500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.884 [2024-11-21 04:56:34.550571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.884 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83623 00:10:17.884 [2024-11-21 04:56:34.550584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:17.884 [2024-11-21 04:56:34.594598] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.157 04:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:18.157 00:10:18.157 real 0m4.071s 00:10:18.157 user 0m6.378s 00:10:18.157 sys 0m0.940s 00:10:18.157 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.157 04:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.157 ************************************ 00:10:18.157 END TEST raid_superblock_test 
00:10:18.157 ************************************ 00:10:18.157 04:56:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:18.157 04:56:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:18.157 04:56:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.157 04:56:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.157 ************************************ 00:10:18.157 START TEST raid_read_error_test 00:10:18.157 ************************************ 00:10:18.157 04:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:18.157 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:18.158 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:18.158 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OFzb471cZo 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83871 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83871 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 83871 ']' 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.418 04:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.418 [2024-11-21 04:56:34.986455] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:10:18.418 [2024-11-21 04:56:34.986646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83871 ] 00:10:18.418 [2024-11-21 04:56:35.134327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.679 [2024-11-21 04:56:35.161918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.679 [2024-11-21 04:56:35.203581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.679 [2024-11-21 04:56:35.203698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 BaseBdev1_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 true 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 [2024-11-21 04:56:35.845174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:19.250 [2024-11-21 04:56:35.845229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.250 [2024-11-21 04:56:35.845250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:19.250 [2024-11-21 04:56:35.845258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.250 [2024-11-21 04:56:35.847368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.250 [2024-11-21 04:56:35.847407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:19.250 BaseBdev1 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 BaseBdev2_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 true 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 [2024-11-21 04:56:35.881682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:19.250 [2024-11-21 04:56:35.881728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.250 [2024-11-21 04:56:35.881746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:19.250 [2024-11-21 04:56:35.881755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.250 [2024-11-21 04:56:35.883829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.250 [2024-11-21 04:56:35.883868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:19.250 BaseBdev2 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 BaseBdev3_malloc 00:10:19.250 04:56:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 true 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 [2024-11-21 04:56:35.922048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:19.250 [2024-11-21 04:56:35.922433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.250 [2024-11-21 04:56:35.922494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:19.250 [2024-11-21 04:56:35.922507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.250 [2024-11-21 04:56:35.925227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.250 [2024-11-21 04:56:35.925294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:19.250 BaseBdev3 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 BaseBdev4_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 true 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.250 [2024-11-21 04:56:35.970904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:19.250 [2024-11-21 04:56:35.970961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.250 [2024-11-21 04:56:35.970984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.250 [2024-11-21 04:56:35.970993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.250 [2024-11-21 04:56:35.973235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.250 [2024-11-21 04:56:35.973273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:19.250 BaseBdev4 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.250 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.511 [2024-11-21 04:56:35.982954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.511 [2024-11-21 04:56:35.985099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.511 [2024-11-21 04:56:35.985204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.511 [2024-11-21 04:56:35.985266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:19.511 [2024-11-21 04:56:35.985507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:19.511 [2024-11-21 04:56:35.985533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:19.511 [2024-11-21 04:56:35.985824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:19.511 [2024-11-21 04:56:35.985977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:19.511 [2024-11-21 04:56:35.985990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:19.511 [2024-11-21 04:56:35.986196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:19.511 04:56:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.511 04:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.511 04:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.511 04:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.511 "name": "raid_bdev1", 00:10:19.511 "uuid": "1ff87c9e-5a2d-40da-af10-d9219f9c97b9", 00:10:19.511 "strip_size_kb": 64, 00:10:19.511 "state": "online", 00:10:19.511 "raid_level": "concat", 00:10:19.511 "superblock": true, 00:10:19.511 "num_base_bdevs": 4, 00:10:19.511 "num_base_bdevs_discovered": 4, 00:10:19.511 "num_base_bdevs_operational": 4, 00:10:19.511 "base_bdevs_list": [ 
00:10:19.511 { 00:10:19.511 "name": "BaseBdev1", 00:10:19.511 "uuid": "ef0e8b6a-f008-5f10-ad98-2340f49f1186", 00:10:19.511 "is_configured": true, 00:10:19.511 "data_offset": 2048, 00:10:19.511 "data_size": 63488 00:10:19.511 }, 00:10:19.511 { 00:10:19.511 "name": "BaseBdev2", 00:10:19.511 "uuid": "ca5b1e41-b0b2-5456-84d3-948d6a0aec81", 00:10:19.511 "is_configured": true, 00:10:19.511 "data_offset": 2048, 00:10:19.511 "data_size": 63488 00:10:19.511 }, 00:10:19.511 { 00:10:19.511 "name": "BaseBdev3", 00:10:19.511 "uuid": "7e8fad8b-47fe-583a-ba9a-1d423f7409e4", 00:10:19.511 "is_configured": true, 00:10:19.511 "data_offset": 2048, 00:10:19.511 "data_size": 63488 00:10:19.511 }, 00:10:19.511 { 00:10:19.511 "name": "BaseBdev4", 00:10:19.511 "uuid": "1530d698-0224-5bd3-aa7d-9e66994bed6e", 00:10:19.511 "is_configured": true, 00:10:19.511 "data_offset": 2048, 00:10:19.511 "data_size": 63488 00:10:19.511 } 00:10:19.511 ] 00:10:19.511 }' 00:10:19.511 04:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.511 04:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.772 04:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:19.772 04:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:20.032 [2024-11-21 04:56:36.530389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.974 04:56:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.974 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.974 04:56:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.974 "name": "raid_bdev1", 00:10:20.974 "uuid": "1ff87c9e-5a2d-40da-af10-d9219f9c97b9", 00:10:20.975 "strip_size_kb": 64, 00:10:20.975 "state": "online", 00:10:20.975 "raid_level": "concat", 00:10:20.975 "superblock": true, 00:10:20.975 "num_base_bdevs": 4, 00:10:20.975 "num_base_bdevs_discovered": 4, 00:10:20.975 "num_base_bdevs_operational": 4, 00:10:20.975 "base_bdevs_list": [ 00:10:20.975 { 00:10:20.975 "name": "BaseBdev1", 00:10:20.975 "uuid": "ef0e8b6a-f008-5f10-ad98-2340f49f1186", 00:10:20.975 "is_configured": true, 00:10:20.975 "data_offset": 2048, 00:10:20.975 "data_size": 63488 00:10:20.975 }, 00:10:20.975 { 00:10:20.975 "name": "BaseBdev2", 00:10:20.975 "uuid": "ca5b1e41-b0b2-5456-84d3-948d6a0aec81", 00:10:20.975 "is_configured": true, 00:10:20.975 "data_offset": 2048, 00:10:20.975 "data_size": 63488 00:10:20.975 }, 00:10:20.975 { 00:10:20.975 "name": "BaseBdev3", 00:10:20.975 "uuid": "7e8fad8b-47fe-583a-ba9a-1d423f7409e4", 00:10:20.975 "is_configured": true, 00:10:20.975 "data_offset": 2048, 00:10:20.975 "data_size": 63488 00:10:20.975 }, 00:10:20.975 { 00:10:20.975 "name": "BaseBdev4", 00:10:20.975 "uuid": "1530d698-0224-5bd3-aa7d-9e66994bed6e", 00:10:20.975 "is_configured": true, 00:10:20.975 "data_offset": 2048, 00:10:20.975 "data_size": 63488 00:10:20.975 } 00:10:20.975 ] 00:10:20.975 }' 00:10:20.975 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.975 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.235 [2024-11-21 04:56:37.882326] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.235 [2024-11-21 04:56:37.882424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.235 [2024-11-21 04:56:37.885190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.235 [2024-11-21 04:56:37.885294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.235 [2024-11-21 04:56:37.885363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.235 [2024-11-21 04:56:37.885424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.235 { 00:10:21.235 "results": [ 00:10:21.235 { 00:10:21.235 "job": "raid_bdev1", 00:10:21.235 "core_mask": "0x1", 00:10:21.235 "workload": "randrw", 00:10:21.235 "percentage": 50, 00:10:21.235 "status": "finished", 00:10:21.235 "queue_depth": 1, 00:10:21.235 "io_size": 131072, 00:10:21.235 "runtime": 1.352727, 00:10:21.235 "iops": 16519.22376059619, 00:10:21.235 "mibps": 2064.9029700745236, 00:10:21.235 "io_failed": 1, 00:10:21.235 "io_timeout": 0, 00:10:21.235 "avg_latency_us": 83.97713382588209, 00:10:21.235 "min_latency_us": 25.3764192139738, 00:10:21.235 "max_latency_us": 1373.6803493449781 00:10:21.235 } 00:10:21.235 ], 00:10:21.235 "core_count": 1 00:10:21.235 } 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83871 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 83871 ']' 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 83871 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83871 00:10:21.235 killing process with pid 83871 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83871' 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 83871 00:10:21.235 [2024-11-21 04:56:37.930567] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.235 04:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 83871 00:10:21.235 [2024-11-21 04:56:37.965963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.495 04:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OFzb471cZo 00:10:21.495 04:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:21.495 04:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:21.495 04:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:21.495 ************************************ 00:10:21.495 END TEST raid_read_error_test 00:10:21.495 ************************************ 00:10:21.495 04:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:21.495 04:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.495 04:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:21.495 04:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:21.495 00:10:21.495 real 0m3.294s 
00:10:21.495 user 0m4.141s 00:10:21.495 sys 0m0.555s 00:10:21.495 04:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.495 04:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.754 04:56:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:21.754 04:56:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:21.754 04:56:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.754 04:56:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.754 ************************************ 00:10:21.754 START TEST raid_write_error_test 00:10:21.754 ************************************ 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.754 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Xk6ykztwZw 00:10:21.755 04:56:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84000 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84000 00:10:21.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 84000 ']' 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.755 04:56:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.755 [2024-11-21 04:56:38.356384] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:10:21.755 [2024-11-21 04:56:38.356517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84000 ] 00:10:22.014 [2024-11-21 04:56:38.504122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.014 [2024-11-21 04:56:38.532285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.014 [2024-11-21 04:56:38.574950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.014 [2024-11-21 04:56:38.574983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.583 BaseBdev1_malloc 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.583 true 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.583 [2024-11-21 04:56:39.241176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:22.583 [2024-11-21 04:56:39.241265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.583 [2024-11-21 04:56:39.241317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:22.583 [2024-11-21 04:56:39.241343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.583 [2024-11-21 04:56:39.243448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.583 [2024-11-21 04:56:39.243515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:22.583 BaseBdev1 00:10:22.583 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.584 BaseBdev2_malloc 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:22.584 04:56:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.584 true 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.584 [2024-11-21 04:56:39.281688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:22.584 [2024-11-21 04:56:39.281740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.584 [2024-11-21 04:56:39.281759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:22.584 [2024-11-21 04:56:39.281768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.584 [2024-11-21 04:56:39.284082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.584 [2024-11-21 04:56:39.284138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:22.584 BaseBdev2 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.584 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:22.585 BaseBdev3_malloc 00:10:22.585 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.585 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:22.585 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.585 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 true 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 [2024-11-21 04:56:39.322139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:22.849 [2024-11-21 04:56:39.322184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.849 [2024-11-21 04:56:39.322204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:22.849 [2024-11-21 04:56:39.322212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.849 [2024-11-21 04:56:39.324318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.849 [2024-11-21 04:56:39.324393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:22.849 BaseBdev3 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 BaseBdev4_malloc 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 true 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 [2024-11-21 04:56:39.372131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:22.849 [2024-11-21 04:56:39.372239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.849 [2024-11-21 04:56:39.372266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:22.849 [2024-11-21 04:56:39.372275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.849 [2024-11-21 04:56:39.374507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.849 [2024-11-21 04:56:39.374543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:22.849 BaseBdev4 
00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.849 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 [2024-11-21 04:56:39.384163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.849 [2024-11-21 04:56:39.386127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.849 [2024-11-21 04:56:39.386266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.849 [2024-11-21 04:56:39.386369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:22.849 [2024-11-21 04:56:39.386634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:22.849 [2024-11-21 04:56:39.386683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:22.849 [2024-11-21 04:56:39.386978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:22.849 [2024-11-21 04:56:39.387127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:22.849 [2024-11-21 04:56:39.387143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:22.849 [2024-11-21 04:56:39.387290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.850 "name": "raid_bdev1", 00:10:22.850 "uuid": "c4f7a2c0-cbe7-4fb7-be9b-97c688cc4ac1", 00:10:22.850 "strip_size_kb": 64, 00:10:22.850 "state": "online", 00:10:22.850 "raid_level": "concat", 00:10:22.850 "superblock": true, 00:10:22.850 "num_base_bdevs": 4, 00:10:22.850 "num_base_bdevs_discovered": 4, 00:10:22.850 
"num_base_bdevs_operational": 4, 00:10:22.850 "base_bdevs_list": [ 00:10:22.850 { 00:10:22.850 "name": "BaseBdev1", 00:10:22.850 "uuid": "5697fa41-6492-5bee-b785-a719c2662e85", 00:10:22.850 "is_configured": true, 00:10:22.850 "data_offset": 2048, 00:10:22.850 "data_size": 63488 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "name": "BaseBdev2", 00:10:22.850 "uuid": "a1939748-b2c3-58cb-9cfb-12700862ccfb", 00:10:22.850 "is_configured": true, 00:10:22.850 "data_offset": 2048, 00:10:22.850 "data_size": 63488 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "name": "BaseBdev3", 00:10:22.850 "uuid": "3ac59b64-9937-56d4-87c1-a79b0500bf93", 00:10:22.850 "is_configured": true, 00:10:22.850 "data_offset": 2048, 00:10:22.850 "data_size": 63488 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "name": "BaseBdev4", 00:10:22.850 "uuid": "b6163077-bff1-5813-b23f-0faaa061b66a", 00:10:22.850 "is_configured": true, 00:10:22.850 "data_offset": 2048, 00:10:22.850 "data_size": 63488 00:10:22.850 } 00:10:22.850 ] 00:10:22.850 }' 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.850 04:56:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.419 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:23.419 04:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:23.419 [2024-11-21 04:56:39.947572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.368 04:56:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.368 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.368 "name": "raid_bdev1", 00:10:24.368 "uuid": "c4f7a2c0-cbe7-4fb7-be9b-97c688cc4ac1", 00:10:24.368 "strip_size_kb": 64, 00:10:24.368 "state": "online", 00:10:24.368 "raid_level": "concat", 00:10:24.368 "superblock": true, 00:10:24.369 "num_base_bdevs": 4, 00:10:24.369 "num_base_bdevs_discovered": 4, 00:10:24.369 "num_base_bdevs_operational": 4, 00:10:24.369 "base_bdevs_list": [ 00:10:24.369 { 00:10:24.369 "name": "BaseBdev1", 00:10:24.369 "uuid": "5697fa41-6492-5bee-b785-a719c2662e85", 00:10:24.369 "is_configured": true, 00:10:24.369 "data_offset": 2048, 00:10:24.369 "data_size": 63488 00:10:24.369 }, 00:10:24.369 { 00:10:24.369 "name": "BaseBdev2", 00:10:24.369 "uuid": "a1939748-b2c3-58cb-9cfb-12700862ccfb", 00:10:24.369 "is_configured": true, 00:10:24.369 "data_offset": 2048, 00:10:24.369 "data_size": 63488 00:10:24.369 }, 00:10:24.369 { 00:10:24.369 "name": "BaseBdev3", 00:10:24.369 "uuid": "3ac59b64-9937-56d4-87c1-a79b0500bf93", 00:10:24.369 "is_configured": true, 00:10:24.369 "data_offset": 2048, 00:10:24.369 "data_size": 63488 00:10:24.369 }, 00:10:24.369 { 00:10:24.369 "name": "BaseBdev4", 00:10:24.369 "uuid": "b6163077-bff1-5813-b23f-0faaa061b66a", 00:10:24.369 "is_configured": true, 00:10:24.369 "data_offset": 2048, 00:10:24.369 "data_size": 63488 00:10:24.369 } 00:10:24.369 ] 00:10:24.369 }' 00:10:24.369 04:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.369 04:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.644 [2024-11-21 04:56:41.315548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.644 [2024-11-21 04:56:41.315585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.644 [2024-11-21 04:56:41.318572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.644 [2024-11-21 04:56:41.318681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.644 [2024-11-21 04:56:41.318773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.644 [2024-11-21 04:56:41.318835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:24.644 { 00:10:24.644 "results": [ 00:10:24.644 { 00:10:24.644 "job": "raid_bdev1", 00:10:24.644 "core_mask": "0x1", 00:10:24.644 "workload": "randrw", 00:10:24.644 "percentage": 50, 00:10:24.644 "status": "finished", 00:10:24.644 "queue_depth": 1, 00:10:24.644 "io_size": 131072, 00:10:24.644 "runtime": 1.368687, 00:10:24.644 "iops": 16442.03532290436, 00:10:24.644 "mibps": 2055.254415363045, 00:10:24.644 "io_failed": 1, 00:10:24.644 "io_timeout": 0, 00:10:24.644 "avg_latency_us": 84.36412147130817, 00:10:24.644 "min_latency_us": 24.370305676855896, 00:10:24.644 "max_latency_us": 1523.926637554585 00:10:24.644 } 00:10:24.644 ], 00:10:24.644 "core_count": 1 00:10:24.644 } 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84000 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 84000 ']' 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 84000 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84000 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84000' 00:10:24.644 killing process with pid 84000 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 84000 00:10:24.644 [2024-11-21 04:56:41.350183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.644 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 84000 00:10:24.904 [2024-11-21 04:56:41.385789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.904 04:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Xk6ykztwZw 00:10:24.904 04:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:24.904 04:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:24.904 04:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:24.904 04:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:24.904 04:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.904 04:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.904 04:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:24.904 00:10:24.904 real 0m3.352s 00:10:24.904 user 0m4.226s 
00:10:24.904 sys 0m0.568s 00:10:24.904 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.904 04:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.904 ************************************ 00:10:24.904 END TEST raid_write_error_test 00:10:24.904 ************************************ 00:10:25.164 04:56:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:25.164 04:56:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:25.164 04:56:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.164 04:56:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.164 04:56:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.164 ************************************ 00:10:25.164 START TEST raid_state_function_test 00:10:25.164 ************************************ 00:10:25.164 04:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:10:25.164 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:25.164 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:25.164 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.165 
04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:25.165 04:56:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84133 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84133' 00:10:25.165 Process raid pid: 84133 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84133 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 84133 ']' 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.165 04:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.165 [2024-11-21 04:56:41.774013] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:10:25.165 [2024-11-21 04:56:41.774324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.425 [2024-11-21 04:56:41.958329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.425 [2024-11-21 04:56:41.984984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.425 [2024-11-21 04:56:42.026349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.425 [2024-11-21 04:56:42.026459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.994 [2024-11-21 04:56:42.623199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.994 [2024-11-21 04:56:42.623335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.994 [2024-11-21 04:56:42.623384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.994 [2024-11-21 04:56:42.623415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.994 [2024-11-21 04:56:42.623437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:25.994 [2024-11-21 04:56:42.623494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.994 [2024-11-21 04:56:42.623537] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.994 [2024-11-21 04:56:42.623578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.994 "name": "Existed_Raid", 00:10:25.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.994 "strip_size_kb": 0, 00:10:25.994 "state": "configuring", 00:10:25.994 "raid_level": "raid1", 00:10:25.994 "superblock": false, 00:10:25.994 "num_base_bdevs": 4, 00:10:25.994 "num_base_bdevs_discovered": 0, 00:10:25.994 "num_base_bdevs_operational": 4, 00:10:25.994 "base_bdevs_list": [ 00:10:25.994 { 00:10:25.994 "name": "BaseBdev1", 00:10:25.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.994 "is_configured": false, 00:10:25.994 "data_offset": 0, 00:10:25.994 "data_size": 0 00:10:25.994 }, 00:10:25.994 { 00:10:25.994 "name": "BaseBdev2", 00:10:25.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.994 "is_configured": false, 00:10:25.994 "data_offset": 0, 00:10:25.994 "data_size": 0 00:10:25.994 }, 00:10:25.994 { 00:10:25.994 "name": "BaseBdev3", 00:10:25.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.994 "is_configured": false, 00:10:25.994 "data_offset": 0, 00:10:25.994 "data_size": 0 00:10:25.994 }, 00:10:25.994 { 00:10:25.994 "name": "BaseBdev4", 00:10:25.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.994 "is_configured": false, 00:10:25.994 "data_offset": 0, 00:10:25.994 "data_size": 0 00:10:25.994 } 00:10:25.994 ] 00:10:25.994 }' 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.994 04:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.563 [2024-11-21 04:56:43.110246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.563 [2024-11-21 04:56:43.110286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.563 [2024-11-21 04:56:43.122248] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.563 [2024-11-21 04:56:43.122288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.563 [2024-11-21 04:56:43.122297] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.563 [2024-11-21 04:56:43.122306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.563 [2024-11-21 04:56:43.122312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.563 [2024-11-21 04:56:43.122321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.563 [2024-11-21 04:56:43.122327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.563 [2024-11-21 04:56:43.122335] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.563 [2024-11-21 04:56:43.143384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.563 BaseBdev1 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.563 [ 00:10:26.563 { 00:10:26.563 "name": "BaseBdev1", 00:10:26.563 "aliases": [ 00:10:26.563 "73a9a29e-edab-4c70-8b25-046476997fb7" 00:10:26.563 ], 00:10:26.563 "product_name": "Malloc disk", 00:10:26.563 "block_size": 512, 00:10:26.563 "num_blocks": 65536, 00:10:26.563 "uuid": "73a9a29e-edab-4c70-8b25-046476997fb7", 00:10:26.563 "assigned_rate_limits": { 00:10:26.563 "rw_ios_per_sec": 0, 00:10:26.563 "rw_mbytes_per_sec": 0, 00:10:26.563 "r_mbytes_per_sec": 0, 00:10:26.563 "w_mbytes_per_sec": 0 00:10:26.563 }, 00:10:26.563 "claimed": true, 00:10:26.563 "claim_type": "exclusive_write", 00:10:26.563 "zoned": false, 00:10:26.563 "supported_io_types": { 00:10:26.563 "read": true, 00:10:26.563 "write": true, 00:10:26.563 "unmap": true, 00:10:26.563 "flush": true, 00:10:26.563 "reset": true, 00:10:26.563 "nvme_admin": false, 00:10:26.563 "nvme_io": false, 00:10:26.563 "nvme_io_md": false, 00:10:26.563 "write_zeroes": true, 00:10:26.563 "zcopy": true, 00:10:26.563 "get_zone_info": false, 00:10:26.563 "zone_management": false, 00:10:26.563 "zone_append": false, 00:10:26.563 "compare": false, 00:10:26.563 "compare_and_write": false, 00:10:26.563 "abort": true, 00:10:26.563 "seek_hole": false, 00:10:26.563 "seek_data": false, 00:10:26.563 "copy": true, 00:10:26.563 "nvme_iov_md": false 00:10:26.563 }, 00:10:26.563 "memory_domains": [ 00:10:26.563 { 00:10:26.563 "dma_device_id": "system", 00:10:26.563 "dma_device_type": 1 00:10:26.563 }, 00:10:26.563 { 00:10:26.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.563 "dma_device_type": 2 00:10:26.563 } 00:10:26.563 ], 00:10:26.563 "driver_specific": {} 00:10:26.563 } 00:10:26.563 ] 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.563 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.564 "name": "Existed_Raid", 
00:10:26.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.564 "strip_size_kb": 0, 00:10:26.564 "state": "configuring", 00:10:26.564 "raid_level": "raid1", 00:10:26.564 "superblock": false, 00:10:26.564 "num_base_bdevs": 4, 00:10:26.564 "num_base_bdevs_discovered": 1, 00:10:26.564 "num_base_bdevs_operational": 4, 00:10:26.564 "base_bdevs_list": [ 00:10:26.564 { 00:10:26.564 "name": "BaseBdev1", 00:10:26.564 "uuid": "73a9a29e-edab-4c70-8b25-046476997fb7", 00:10:26.564 "is_configured": true, 00:10:26.564 "data_offset": 0, 00:10:26.564 "data_size": 65536 00:10:26.564 }, 00:10:26.564 { 00:10:26.564 "name": "BaseBdev2", 00:10:26.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.564 "is_configured": false, 00:10:26.564 "data_offset": 0, 00:10:26.564 "data_size": 0 00:10:26.564 }, 00:10:26.564 { 00:10:26.564 "name": "BaseBdev3", 00:10:26.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.564 "is_configured": false, 00:10:26.564 "data_offset": 0, 00:10:26.564 "data_size": 0 00:10:26.564 }, 00:10:26.564 { 00:10:26.564 "name": "BaseBdev4", 00:10:26.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.564 "is_configured": false, 00:10:26.564 "data_offset": 0, 00:10:26.564 "data_size": 0 00:10:26.564 } 00:10:26.564 ] 00:10:26.564 }' 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.564 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.132 [2024-11-21 04:56:43.606634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.132 [2024-11-21 04:56:43.606735] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.132 [2024-11-21 04:56:43.614636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.132 [2024-11-21 04:56:43.616567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.132 [2024-11-21 04:56:43.616609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.132 [2024-11-21 04:56:43.616618] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.132 [2024-11-21 04:56:43.616627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.132 [2024-11-21 04:56:43.616634] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:27.132 [2024-11-21 04:56:43.616643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:27.132 
04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.132 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.132 "name": "Existed_Raid", 00:10:27.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.132 "strip_size_kb": 0, 00:10:27.132 "state": "configuring", 00:10:27.132 "raid_level": "raid1", 00:10:27.132 "superblock": false, 00:10:27.132 "num_base_bdevs": 4, 00:10:27.132 "num_base_bdevs_discovered": 1, 
00:10:27.132 "num_base_bdevs_operational": 4, 00:10:27.132 "base_bdevs_list": [ 00:10:27.132 { 00:10:27.132 "name": "BaseBdev1", 00:10:27.132 "uuid": "73a9a29e-edab-4c70-8b25-046476997fb7", 00:10:27.132 "is_configured": true, 00:10:27.132 "data_offset": 0, 00:10:27.132 "data_size": 65536 00:10:27.132 }, 00:10:27.132 { 00:10:27.132 "name": "BaseBdev2", 00:10:27.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.133 "is_configured": false, 00:10:27.133 "data_offset": 0, 00:10:27.133 "data_size": 0 00:10:27.133 }, 00:10:27.133 { 00:10:27.133 "name": "BaseBdev3", 00:10:27.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.133 "is_configured": false, 00:10:27.133 "data_offset": 0, 00:10:27.133 "data_size": 0 00:10:27.133 }, 00:10:27.133 { 00:10:27.133 "name": "BaseBdev4", 00:10:27.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.133 "is_configured": false, 00:10:27.133 "data_offset": 0, 00:10:27.133 "data_size": 0 00:10:27.133 } 00:10:27.133 ] 00:10:27.133 }' 00:10:27.133 04:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.133 04:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.392 [2024-11-21 04:56:44.061098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.392 BaseBdev2 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.392 [ 00:10:27.392 { 00:10:27.392 "name": "BaseBdev2", 00:10:27.392 "aliases": [ 00:10:27.392 "f3f366e8-afed-4a8a-b837-8137100866c2" 00:10:27.392 ], 00:10:27.392 "product_name": "Malloc disk", 00:10:27.392 "block_size": 512, 00:10:27.392 "num_blocks": 65536, 00:10:27.392 "uuid": "f3f366e8-afed-4a8a-b837-8137100866c2", 00:10:27.392 "assigned_rate_limits": { 00:10:27.392 "rw_ios_per_sec": 0, 00:10:27.392 "rw_mbytes_per_sec": 0, 00:10:27.392 "r_mbytes_per_sec": 0, 00:10:27.392 "w_mbytes_per_sec": 0 00:10:27.392 }, 00:10:27.392 "claimed": true, 00:10:27.392 "claim_type": "exclusive_write", 00:10:27.392 "zoned": false, 00:10:27.392 "supported_io_types": { 00:10:27.392 "read": true, 
00:10:27.392 "write": true, 00:10:27.392 "unmap": true, 00:10:27.392 "flush": true, 00:10:27.392 "reset": true, 00:10:27.392 "nvme_admin": false, 00:10:27.392 "nvme_io": false, 00:10:27.392 "nvme_io_md": false, 00:10:27.392 "write_zeroes": true, 00:10:27.392 "zcopy": true, 00:10:27.392 "get_zone_info": false, 00:10:27.392 "zone_management": false, 00:10:27.392 "zone_append": false, 00:10:27.392 "compare": false, 00:10:27.392 "compare_and_write": false, 00:10:27.392 "abort": true, 00:10:27.392 "seek_hole": false, 00:10:27.392 "seek_data": false, 00:10:27.392 "copy": true, 00:10:27.392 "nvme_iov_md": false 00:10:27.392 }, 00:10:27.392 "memory_domains": [ 00:10:27.392 { 00:10:27.392 "dma_device_id": "system", 00:10:27.392 "dma_device_type": 1 00:10:27.392 }, 00:10:27.392 { 00:10:27.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.392 "dma_device_type": 2 00:10:27.392 } 00:10:27.392 ], 00:10:27.392 "driver_specific": {} 00:10:27.392 } 00:10:27.392 ] 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.392 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.652 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.652 "name": "Existed_Raid", 00:10:27.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.652 "strip_size_kb": 0, 00:10:27.652 "state": "configuring", 00:10:27.652 "raid_level": "raid1", 00:10:27.652 "superblock": false, 00:10:27.652 "num_base_bdevs": 4, 00:10:27.652 "num_base_bdevs_discovered": 2, 00:10:27.652 "num_base_bdevs_operational": 4, 00:10:27.652 "base_bdevs_list": [ 00:10:27.652 { 00:10:27.652 "name": "BaseBdev1", 00:10:27.652 "uuid": "73a9a29e-edab-4c70-8b25-046476997fb7", 00:10:27.652 "is_configured": true, 00:10:27.652 "data_offset": 0, 00:10:27.652 "data_size": 65536 00:10:27.652 }, 00:10:27.652 { 00:10:27.652 "name": "BaseBdev2", 00:10:27.652 "uuid": "f3f366e8-afed-4a8a-b837-8137100866c2", 00:10:27.652 "is_configured": true, 
00:10:27.652 "data_offset": 0, 00:10:27.652 "data_size": 65536 00:10:27.652 }, 00:10:27.652 { 00:10:27.652 "name": "BaseBdev3", 00:10:27.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.652 "is_configured": false, 00:10:27.652 "data_offset": 0, 00:10:27.652 "data_size": 0 00:10:27.652 }, 00:10:27.652 { 00:10:27.652 "name": "BaseBdev4", 00:10:27.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.652 "is_configured": false, 00:10:27.652 "data_offset": 0, 00:10:27.652 "data_size": 0 00:10:27.652 } 00:10:27.652 ] 00:10:27.652 }' 00:10:27.652 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.652 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.911 [2024-11-21 04:56:44.597229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.911 BaseBdev3 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.911 [ 00:10:27.911 { 00:10:27.911 "name": "BaseBdev3", 00:10:27.911 "aliases": [ 00:10:27.911 "39524abf-67e0-4277-9f7c-45c2555388cc" 00:10:27.911 ], 00:10:27.911 "product_name": "Malloc disk", 00:10:27.911 "block_size": 512, 00:10:27.911 "num_blocks": 65536, 00:10:27.911 "uuid": "39524abf-67e0-4277-9f7c-45c2555388cc", 00:10:27.911 "assigned_rate_limits": { 00:10:27.911 "rw_ios_per_sec": 0, 00:10:27.911 "rw_mbytes_per_sec": 0, 00:10:27.911 "r_mbytes_per_sec": 0, 00:10:27.911 "w_mbytes_per_sec": 0 00:10:27.911 }, 00:10:27.911 "claimed": true, 00:10:27.911 "claim_type": "exclusive_write", 00:10:27.911 "zoned": false, 00:10:27.911 "supported_io_types": { 00:10:27.911 "read": true, 00:10:27.911 "write": true, 00:10:27.911 "unmap": true, 00:10:27.911 "flush": true, 00:10:27.911 "reset": true, 00:10:27.911 "nvme_admin": false, 00:10:27.911 "nvme_io": false, 00:10:27.911 "nvme_io_md": false, 00:10:27.911 "write_zeroes": true, 00:10:27.911 "zcopy": true, 00:10:27.911 "get_zone_info": false, 00:10:27.911 "zone_management": false, 00:10:27.911 "zone_append": false, 00:10:27.911 "compare": false, 00:10:27.911 "compare_and_write": false, 
00:10:27.911 "abort": true, 00:10:27.911 "seek_hole": false, 00:10:27.911 "seek_data": false, 00:10:27.911 "copy": true, 00:10:27.911 "nvme_iov_md": false 00:10:27.911 }, 00:10:27.911 "memory_domains": [ 00:10:27.911 { 00:10:27.911 "dma_device_id": "system", 00:10:27.911 "dma_device_type": 1 00:10:27.911 }, 00:10:27.911 { 00:10:27.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.911 "dma_device_type": 2 00:10:27.911 } 00:10:27.911 ], 00:10:27.911 "driver_specific": {} 00:10:27.911 } 00:10:27.911 ] 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:27.911 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.912 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.912 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.912 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.171 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.171 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.171 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.171 "name": "Existed_Raid", 00:10:28.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.171 "strip_size_kb": 0, 00:10:28.171 "state": "configuring", 00:10:28.171 "raid_level": "raid1", 00:10:28.171 "superblock": false, 00:10:28.171 "num_base_bdevs": 4, 00:10:28.171 "num_base_bdevs_discovered": 3, 00:10:28.171 "num_base_bdevs_operational": 4, 00:10:28.171 "base_bdevs_list": [ 00:10:28.171 { 00:10:28.171 "name": "BaseBdev1", 00:10:28.171 "uuid": "73a9a29e-edab-4c70-8b25-046476997fb7", 00:10:28.171 "is_configured": true, 00:10:28.171 "data_offset": 0, 00:10:28.171 "data_size": 65536 00:10:28.171 }, 00:10:28.171 { 00:10:28.171 "name": "BaseBdev2", 00:10:28.171 "uuid": "f3f366e8-afed-4a8a-b837-8137100866c2", 00:10:28.171 "is_configured": true, 00:10:28.171 "data_offset": 0, 00:10:28.171 "data_size": 65536 00:10:28.171 }, 00:10:28.171 { 00:10:28.171 "name": "BaseBdev3", 00:10:28.171 "uuid": "39524abf-67e0-4277-9f7c-45c2555388cc", 00:10:28.171 "is_configured": true, 00:10:28.171 "data_offset": 0, 00:10:28.171 "data_size": 65536 00:10:28.171 }, 00:10:28.171 { 00:10:28.171 "name": "BaseBdev4", 00:10:28.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.171 "is_configured": false, 
00:10:28.171 "data_offset": 0, 00:10:28.171 "data_size": 0 00:10:28.171 } 00:10:28.171 ] 00:10:28.171 }' 00:10:28.171 04:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.171 04:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.431 [2024-11-21 04:56:45.103526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.431 [2024-11-21 04:56:45.103578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:28.431 [2024-11-21 04:56:45.103587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:28.431 [2024-11-21 04:56:45.103900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:28.431 [2024-11-21 04:56:45.104038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:28.431 [2024-11-21 04:56:45.104051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:28.431 [2024-11-21 04:56:45.104273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.431 BaseBdev4 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.431 [ 00:10:28.431 { 00:10:28.431 "name": "BaseBdev4", 00:10:28.431 "aliases": [ 00:10:28.431 "c6fcd271-59cc-4bd6-87a0-ca73b2678650" 00:10:28.431 ], 00:10:28.431 "product_name": "Malloc disk", 00:10:28.431 "block_size": 512, 00:10:28.431 "num_blocks": 65536, 00:10:28.431 "uuid": "c6fcd271-59cc-4bd6-87a0-ca73b2678650", 00:10:28.431 "assigned_rate_limits": { 00:10:28.431 "rw_ios_per_sec": 0, 00:10:28.431 "rw_mbytes_per_sec": 0, 00:10:28.431 "r_mbytes_per_sec": 0, 00:10:28.431 "w_mbytes_per_sec": 0 00:10:28.431 }, 00:10:28.431 "claimed": true, 00:10:28.431 "claim_type": "exclusive_write", 00:10:28.431 "zoned": false, 00:10:28.431 "supported_io_types": { 00:10:28.431 "read": true, 00:10:28.431 "write": true, 00:10:28.431 "unmap": true, 00:10:28.431 "flush": true, 00:10:28.431 "reset": true, 00:10:28.431 
"nvme_admin": false, 00:10:28.431 "nvme_io": false, 00:10:28.431 "nvme_io_md": false, 00:10:28.431 "write_zeroes": true, 00:10:28.431 "zcopy": true, 00:10:28.431 "get_zone_info": false, 00:10:28.431 "zone_management": false, 00:10:28.431 "zone_append": false, 00:10:28.431 "compare": false, 00:10:28.431 "compare_and_write": false, 00:10:28.431 "abort": true, 00:10:28.431 "seek_hole": false, 00:10:28.431 "seek_data": false, 00:10:28.431 "copy": true, 00:10:28.431 "nvme_iov_md": false 00:10:28.431 }, 00:10:28.431 "memory_domains": [ 00:10:28.431 { 00:10:28.431 "dma_device_id": "system", 00:10:28.431 "dma_device_type": 1 00:10:28.431 }, 00:10:28.431 { 00:10:28.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.431 "dma_device_type": 2 00:10:28.431 } 00:10:28.431 ], 00:10:28.431 "driver_specific": {} 00:10:28.431 } 00:10:28.431 ] 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.431 04:56:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.431 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.690 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.690 "name": "Existed_Raid", 00:10:28.690 "uuid": "71fc7714-99f7-4292-ac76-4341287c1db9", 00:10:28.690 "strip_size_kb": 0, 00:10:28.690 "state": "online", 00:10:28.690 "raid_level": "raid1", 00:10:28.690 "superblock": false, 00:10:28.690 "num_base_bdevs": 4, 00:10:28.690 "num_base_bdevs_discovered": 4, 00:10:28.690 "num_base_bdevs_operational": 4, 00:10:28.690 "base_bdevs_list": [ 00:10:28.690 { 00:10:28.690 "name": "BaseBdev1", 00:10:28.690 "uuid": "73a9a29e-edab-4c70-8b25-046476997fb7", 00:10:28.690 "is_configured": true, 00:10:28.690 "data_offset": 0, 00:10:28.690 "data_size": 65536 00:10:28.690 }, 00:10:28.690 { 00:10:28.690 "name": "BaseBdev2", 00:10:28.690 "uuid": "f3f366e8-afed-4a8a-b837-8137100866c2", 00:10:28.690 "is_configured": true, 00:10:28.690 "data_offset": 0, 00:10:28.690 "data_size": 65536 00:10:28.690 }, 00:10:28.690 { 00:10:28.690 "name": "BaseBdev3", 00:10:28.690 "uuid": 
"39524abf-67e0-4277-9f7c-45c2555388cc", 00:10:28.690 "is_configured": true, 00:10:28.690 "data_offset": 0, 00:10:28.690 "data_size": 65536 00:10:28.690 }, 00:10:28.690 { 00:10:28.690 "name": "BaseBdev4", 00:10:28.690 "uuid": "c6fcd271-59cc-4bd6-87a0-ca73b2678650", 00:10:28.690 "is_configured": true, 00:10:28.690 "data_offset": 0, 00:10:28.690 "data_size": 65536 00:10:28.690 } 00:10:28.690 ] 00:10:28.690 }' 00:10:28.690 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.690 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.949 [2024-11-21 04:56:45.611072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.949 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.949 04:56:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.949 "name": "Existed_Raid", 00:10:28.949 "aliases": [ 00:10:28.949 "71fc7714-99f7-4292-ac76-4341287c1db9" 00:10:28.949 ], 00:10:28.949 "product_name": "Raid Volume", 00:10:28.949 "block_size": 512, 00:10:28.949 "num_blocks": 65536, 00:10:28.949 "uuid": "71fc7714-99f7-4292-ac76-4341287c1db9", 00:10:28.949 "assigned_rate_limits": { 00:10:28.949 "rw_ios_per_sec": 0, 00:10:28.949 "rw_mbytes_per_sec": 0, 00:10:28.949 "r_mbytes_per_sec": 0, 00:10:28.949 "w_mbytes_per_sec": 0 00:10:28.949 }, 00:10:28.949 "claimed": false, 00:10:28.949 "zoned": false, 00:10:28.949 "supported_io_types": { 00:10:28.949 "read": true, 00:10:28.949 "write": true, 00:10:28.949 "unmap": false, 00:10:28.949 "flush": false, 00:10:28.949 "reset": true, 00:10:28.949 "nvme_admin": false, 00:10:28.949 "nvme_io": false, 00:10:28.949 "nvme_io_md": false, 00:10:28.949 "write_zeroes": true, 00:10:28.949 "zcopy": false, 00:10:28.949 "get_zone_info": false, 00:10:28.949 "zone_management": false, 00:10:28.949 "zone_append": false, 00:10:28.949 "compare": false, 00:10:28.949 "compare_and_write": false, 00:10:28.949 "abort": false, 00:10:28.949 "seek_hole": false, 00:10:28.949 "seek_data": false, 00:10:28.949 "copy": false, 00:10:28.949 "nvme_iov_md": false 00:10:28.949 }, 00:10:28.949 "memory_domains": [ 00:10:28.949 { 00:10:28.949 "dma_device_id": "system", 00:10:28.949 "dma_device_type": 1 00:10:28.949 }, 00:10:28.949 { 00:10:28.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.949 "dma_device_type": 2 00:10:28.949 }, 00:10:28.949 { 00:10:28.949 "dma_device_id": "system", 00:10:28.949 "dma_device_type": 1 00:10:28.949 }, 00:10:28.949 { 00:10:28.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.949 "dma_device_type": 2 00:10:28.949 }, 00:10:28.949 { 00:10:28.949 "dma_device_id": "system", 00:10:28.949 "dma_device_type": 1 00:10:28.949 }, 00:10:28.950 { 00:10:28.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:28.950 "dma_device_type": 2 00:10:28.950 }, 00:10:28.950 { 00:10:28.950 "dma_device_id": "system", 00:10:28.950 "dma_device_type": 1 00:10:28.950 }, 00:10:28.950 { 00:10:28.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.950 "dma_device_type": 2 00:10:28.950 } 00:10:28.950 ], 00:10:28.950 "driver_specific": { 00:10:28.950 "raid": { 00:10:28.950 "uuid": "71fc7714-99f7-4292-ac76-4341287c1db9", 00:10:28.950 "strip_size_kb": 0, 00:10:28.950 "state": "online", 00:10:28.950 "raid_level": "raid1", 00:10:28.950 "superblock": false, 00:10:28.950 "num_base_bdevs": 4, 00:10:28.950 "num_base_bdevs_discovered": 4, 00:10:28.950 "num_base_bdevs_operational": 4, 00:10:28.950 "base_bdevs_list": [ 00:10:28.950 { 00:10:28.950 "name": "BaseBdev1", 00:10:28.950 "uuid": "73a9a29e-edab-4c70-8b25-046476997fb7", 00:10:28.950 "is_configured": true, 00:10:28.950 "data_offset": 0, 00:10:28.950 "data_size": 65536 00:10:28.950 }, 00:10:28.950 { 00:10:28.950 "name": "BaseBdev2", 00:10:28.950 "uuid": "f3f366e8-afed-4a8a-b837-8137100866c2", 00:10:28.950 "is_configured": true, 00:10:28.950 "data_offset": 0, 00:10:28.950 "data_size": 65536 00:10:28.950 }, 00:10:28.950 { 00:10:28.950 "name": "BaseBdev3", 00:10:28.950 "uuid": "39524abf-67e0-4277-9f7c-45c2555388cc", 00:10:28.950 "is_configured": true, 00:10:28.950 "data_offset": 0, 00:10:28.950 "data_size": 65536 00:10:28.950 }, 00:10:28.950 { 00:10:28.950 "name": "BaseBdev4", 00:10:28.950 "uuid": "c6fcd271-59cc-4bd6-87a0-ca73b2678650", 00:10:28.950 "is_configured": true, 00:10:28.950 "data_offset": 0, 00:10:28.950 "data_size": 65536 00:10:28.950 } 00:10:28.950 ] 00:10:28.950 } 00:10:28.950 } 00:10:28.950 }' 00:10:28.950 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.209 BaseBdev2 00:10:29.209 BaseBdev3 
00:10:29.209 BaseBdev4' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.209 04:56:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.209 04:56:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.209 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.209 [2024-11-21 04:56:45.938232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.468 
04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.468 04:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.468 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.468 "name": "Existed_Raid", 00:10:29.468 "uuid": "71fc7714-99f7-4292-ac76-4341287c1db9", 00:10:29.468 "strip_size_kb": 0, 00:10:29.468 "state": "online", 00:10:29.468 "raid_level": "raid1", 00:10:29.468 "superblock": false, 00:10:29.468 "num_base_bdevs": 4, 00:10:29.468 "num_base_bdevs_discovered": 3, 00:10:29.468 "num_base_bdevs_operational": 3, 00:10:29.468 "base_bdevs_list": [ 00:10:29.468 { 00:10:29.468 "name": null, 00:10:29.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.468 "is_configured": false, 00:10:29.468 "data_offset": 0, 00:10:29.468 "data_size": 65536 00:10:29.468 }, 00:10:29.468 { 00:10:29.468 "name": "BaseBdev2", 00:10:29.468 "uuid": "f3f366e8-afed-4a8a-b837-8137100866c2", 00:10:29.468 "is_configured": true, 00:10:29.468 "data_offset": 0, 00:10:29.468 "data_size": 65536 00:10:29.468 }, 00:10:29.468 { 00:10:29.468 "name": "BaseBdev3", 00:10:29.468 "uuid": "39524abf-67e0-4277-9f7c-45c2555388cc", 00:10:29.468 "is_configured": true, 00:10:29.468 "data_offset": 0, 
00:10:29.468 "data_size": 65536 00:10:29.468 }, 00:10:29.468 { 00:10:29.468 "name": "BaseBdev4", 00:10:29.468 "uuid": "c6fcd271-59cc-4bd6-87a0-ca73b2678650", 00:10:29.468 "is_configured": true, 00:10:29.468 "data_offset": 0, 00:10:29.468 "data_size": 65536 00:10:29.468 } 00:10:29.468 ] 00:10:29.468 }' 00:10:29.468 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.468 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.727 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:29.727 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.727 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.727 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.727 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.727 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.728 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.728 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.728 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.728 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:29.728 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.728 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.987 [2024-11-21 04:56:46.464886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.987 [2024-11-21 04:56:46.535814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.987 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 [2024-11-21 04:56:46.594903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:29.988 [2024-11-21 04:56:46.595002] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.988 [2024-11-21 04:56:46.606481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.988 [2024-11-21 04:56:46.606533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.988 [2024-11-21 04:56:46.606546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 BaseBdev2 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 
-- # [[ -z '' ]] 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 [ 00:10:29.988 { 00:10:29.988 "name": "BaseBdev2", 00:10:29.988 "aliases": [ 00:10:29.988 "685745f9-82ba-42f6-9d25-60d6046ebd49" 00:10:29.988 ], 00:10:29.988 "product_name": "Malloc disk", 00:10:29.988 "block_size": 512, 00:10:29.988 "num_blocks": 65536, 00:10:29.988 "uuid": "685745f9-82ba-42f6-9d25-60d6046ebd49", 00:10:29.988 "assigned_rate_limits": { 00:10:29.988 "rw_ios_per_sec": 0, 00:10:29.988 "rw_mbytes_per_sec": 0, 00:10:29.988 "r_mbytes_per_sec": 0, 00:10:29.988 "w_mbytes_per_sec": 0 00:10:29.988 }, 00:10:29.988 "claimed": false, 00:10:29.988 "zoned": false, 00:10:29.988 "supported_io_types": { 00:10:29.988 "read": true, 00:10:29.988 "write": true, 00:10:29.988 "unmap": true, 00:10:29.988 "flush": true, 00:10:29.988 "reset": true, 00:10:29.988 "nvme_admin": false, 00:10:29.988 "nvme_io": false, 00:10:29.988 "nvme_io_md": false, 00:10:29.988 "write_zeroes": true, 00:10:29.988 "zcopy": true, 00:10:29.988 "get_zone_info": false, 00:10:29.988 "zone_management": false, 00:10:29.988 "zone_append": false, 00:10:29.988 "compare": false, 
00:10:29.988 "compare_and_write": false, 00:10:29.988 "abort": true, 00:10:29.988 "seek_hole": false, 00:10:29.988 "seek_data": false, 00:10:29.988 "copy": true, 00:10:29.988 "nvme_iov_md": false 00:10:29.988 }, 00:10:29.988 "memory_domains": [ 00:10:29.988 { 00:10:29.988 "dma_device_id": "system", 00:10:29.988 "dma_device_type": 1 00:10:29.988 }, 00:10:29.988 { 00:10:29.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.988 "dma_device_type": 2 00:10:29.988 } 00:10:29.988 ], 00:10:29.988 "driver_specific": {} 00:10:29.988 } 00:10:29.988 ] 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.988 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.248 BaseBdev3 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' 
]] 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.248 [ 00:10:30.248 { 00:10:30.248 "name": "BaseBdev3", 00:10:30.248 "aliases": [ 00:10:30.248 "bd87598b-899d-4ea2-8dd4-b2e51ea602ba" 00:10:30.248 ], 00:10:30.248 "product_name": "Malloc disk", 00:10:30.248 "block_size": 512, 00:10:30.248 "num_blocks": 65536, 00:10:30.248 "uuid": "bd87598b-899d-4ea2-8dd4-b2e51ea602ba", 00:10:30.248 "assigned_rate_limits": { 00:10:30.248 "rw_ios_per_sec": 0, 00:10:30.248 "rw_mbytes_per_sec": 0, 00:10:30.248 "r_mbytes_per_sec": 0, 00:10:30.248 "w_mbytes_per_sec": 0 00:10:30.248 }, 00:10:30.248 "claimed": false, 00:10:30.248 "zoned": false, 00:10:30.248 "supported_io_types": { 00:10:30.248 "read": true, 00:10:30.248 "write": true, 00:10:30.248 "unmap": true, 00:10:30.248 "flush": true, 00:10:30.248 "reset": true, 00:10:30.248 "nvme_admin": false, 00:10:30.248 "nvme_io": false, 00:10:30.248 "nvme_io_md": false, 00:10:30.248 "write_zeroes": true, 00:10:30.248 "zcopy": true, 00:10:30.248 "get_zone_info": false, 00:10:30.248 "zone_management": false, 00:10:30.248 "zone_append": false, 00:10:30.248 "compare": false, 00:10:30.248 
"compare_and_write": false, 00:10:30.248 "abort": true, 00:10:30.248 "seek_hole": false, 00:10:30.248 "seek_data": false, 00:10:30.248 "copy": true, 00:10:30.248 "nvme_iov_md": false 00:10:30.248 }, 00:10:30.248 "memory_domains": [ 00:10:30.248 { 00:10:30.248 "dma_device_id": "system", 00:10:30.248 "dma_device_type": 1 00:10:30.248 }, 00:10:30.248 { 00:10:30.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.248 "dma_device_type": 2 00:10:30.248 } 00:10:30.248 ], 00:10:30.248 "driver_specific": {} 00:10:30.248 } 00:10:30.248 ] 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.248 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.249 BaseBdev4 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.249 [ 00:10:30.249 { 00:10:30.249 "name": "BaseBdev4", 00:10:30.249 "aliases": [ 00:10:30.249 "ba24dae0-fdc1-44d5-97bf-070b7ff120a6" 00:10:30.249 ], 00:10:30.249 "product_name": "Malloc disk", 00:10:30.249 "block_size": 512, 00:10:30.249 "num_blocks": 65536, 00:10:30.249 "uuid": "ba24dae0-fdc1-44d5-97bf-070b7ff120a6", 00:10:30.249 "assigned_rate_limits": { 00:10:30.249 "rw_ios_per_sec": 0, 00:10:30.249 "rw_mbytes_per_sec": 0, 00:10:30.249 "r_mbytes_per_sec": 0, 00:10:30.249 "w_mbytes_per_sec": 0 00:10:30.249 }, 00:10:30.249 "claimed": false, 00:10:30.249 "zoned": false, 00:10:30.249 "supported_io_types": { 00:10:30.249 "read": true, 00:10:30.249 "write": true, 00:10:30.249 "unmap": true, 00:10:30.249 "flush": true, 00:10:30.249 "reset": true, 00:10:30.249 "nvme_admin": false, 00:10:30.249 "nvme_io": false, 00:10:30.249 "nvme_io_md": false, 00:10:30.249 "write_zeroes": true, 00:10:30.249 "zcopy": true, 00:10:30.249 "get_zone_info": false, 00:10:30.249 "zone_management": false, 00:10:30.249 "zone_append": false, 00:10:30.249 "compare": false, 00:10:30.249 
"compare_and_write": false, 00:10:30.249 "abort": true, 00:10:30.249 "seek_hole": false, 00:10:30.249 "seek_data": false, 00:10:30.249 "copy": true, 00:10:30.249 "nvme_iov_md": false 00:10:30.249 }, 00:10:30.249 "memory_domains": [ 00:10:30.249 { 00:10:30.249 "dma_device_id": "system", 00:10:30.249 "dma_device_type": 1 00:10:30.249 }, 00:10:30.249 { 00:10:30.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.249 "dma_device_type": 2 00:10:30.249 } 00:10:30.249 ], 00:10:30.249 "driver_specific": {} 00:10:30.249 } 00:10:30.249 ] 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.249 [2024-11-21 04:56:46.827474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.249 [2024-11-21 04:56:46.827560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.249 [2024-11-21 04:56:46.827601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.249 [2024-11-21 04:56:46.829423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.249 [2024-11-21 04:56:46.829502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.249 "name": "Existed_Raid", 00:10:30.249 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:30.249 "strip_size_kb": 0, 00:10:30.249 "state": "configuring", 00:10:30.249 "raid_level": "raid1", 00:10:30.249 "superblock": false, 00:10:30.249 "num_base_bdevs": 4, 00:10:30.249 "num_base_bdevs_discovered": 3, 00:10:30.249 "num_base_bdevs_operational": 4, 00:10:30.249 "base_bdevs_list": [ 00:10:30.249 { 00:10:30.249 "name": "BaseBdev1", 00:10:30.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.249 "is_configured": false, 00:10:30.249 "data_offset": 0, 00:10:30.249 "data_size": 0 00:10:30.249 }, 00:10:30.249 { 00:10:30.249 "name": "BaseBdev2", 00:10:30.249 "uuid": "685745f9-82ba-42f6-9d25-60d6046ebd49", 00:10:30.249 "is_configured": true, 00:10:30.249 "data_offset": 0, 00:10:30.249 "data_size": 65536 00:10:30.249 }, 00:10:30.249 { 00:10:30.249 "name": "BaseBdev3", 00:10:30.249 "uuid": "bd87598b-899d-4ea2-8dd4-b2e51ea602ba", 00:10:30.249 "is_configured": true, 00:10:30.249 "data_offset": 0, 00:10:30.249 "data_size": 65536 00:10:30.249 }, 00:10:30.249 { 00:10:30.249 "name": "BaseBdev4", 00:10:30.249 "uuid": "ba24dae0-fdc1-44d5-97bf-070b7ff120a6", 00:10:30.249 "is_configured": true, 00:10:30.249 "data_offset": 0, 00:10:30.249 "data_size": 65536 00:10:30.249 } 00:10:30.249 ] 00:10:30.249 }' 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.249 04:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.827 [2024-11-21 04:56:47.282765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.827 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.827 "name": "Existed_Raid", 00:10:30.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.827 
"strip_size_kb": 0, 00:10:30.827 "state": "configuring", 00:10:30.827 "raid_level": "raid1", 00:10:30.827 "superblock": false, 00:10:30.827 "num_base_bdevs": 4, 00:10:30.827 "num_base_bdevs_discovered": 2, 00:10:30.827 "num_base_bdevs_operational": 4, 00:10:30.827 "base_bdevs_list": [ 00:10:30.827 { 00:10:30.827 "name": "BaseBdev1", 00:10:30.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.827 "is_configured": false, 00:10:30.827 "data_offset": 0, 00:10:30.827 "data_size": 0 00:10:30.827 }, 00:10:30.827 { 00:10:30.827 "name": null, 00:10:30.827 "uuid": "685745f9-82ba-42f6-9d25-60d6046ebd49", 00:10:30.827 "is_configured": false, 00:10:30.827 "data_offset": 0, 00:10:30.827 "data_size": 65536 00:10:30.827 }, 00:10:30.827 { 00:10:30.827 "name": "BaseBdev3", 00:10:30.827 "uuid": "bd87598b-899d-4ea2-8dd4-b2e51ea602ba", 00:10:30.827 "is_configured": true, 00:10:30.827 "data_offset": 0, 00:10:30.827 "data_size": 65536 00:10:30.827 }, 00:10:30.827 { 00:10:30.827 "name": "BaseBdev4", 00:10:30.827 "uuid": "ba24dae0-fdc1-44d5-97bf-070b7ff120a6", 00:10:30.827 "is_configured": true, 00:10:30.828 "data_offset": 0, 00:10:30.828 "data_size": 65536 00:10:30.828 } 00:10:30.828 ] 00:10:30.828 }' 00:10:30.828 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.828 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.088 04:56:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.088 [2024-11-21 04:56:47.764985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.088 BaseBdev1 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.088 [ 00:10:31.088 { 00:10:31.088 "name": "BaseBdev1", 00:10:31.088 "aliases": [ 00:10:31.088 "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0" 00:10:31.088 ], 00:10:31.088 "product_name": "Malloc disk", 00:10:31.088 "block_size": 512, 00:10:31.088 "num_blocks": 65536, 00:10:31.088 "uuid": "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0", 00:10:31.088 "assigned_rate_limits": { 00:10:31.088 "rw_ios_per_sec": 0, 00:10:31.088 "rw_mbytes_per_sec": 0, 00:10:31.088 "r_mbytes_per_sec": 0, 00:10:31.088 "w_mbytes_per_sec": 0 00:10:31.088 }, 00:10:31.088 "claimed": true, 00:10:31.088 "claim_type": "exclusive_write", 00:10:31.088 "zoned": false, 00:10:31.088 "supported_io_types": { 00:10:31.088 "read": true, 00:10:31.088 "write": true, 00:10:31.088 "unmap": true, 00:10:31.088 "flush": true, 00:10:31.088 "reset": true, 00:10:31.088 "nvme_admin": false, 00:10:31.088 "nvme_io": false, 00:10:31.088 "nvme_io_md": false, 00:10:31.088 "write_zeroes": true, 00:10:31.088 "zcopy": true, 00:10:31.088 "get_zone_info": false, 00:10:31.088 "zone_management": false, 00:10:31.088 "zone_append": false, 00:10:31.088 "compare": false, 00:10:31.088 "compare_and_write": false, 00:10:31.088 "abort": true, 00:10:31.088 "seek_hole": false, 00:10:31.088 "seek_data": false, 00:10:31.088 "copy": true, 00:10:31.088 "nvme_iov_md": false 00:10:31.088 }, 00:10:31.088 "memory_domains": [ 00:10:31.088 { 00:10:31.088 "dma_device_id": "system", 00:10:31.088 "dma_device_type": 1 00:10:31.088 }, 00:10:31.088 { 00:10:31.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.088 "dma_device_type": 2 00:10:31.088 } 00:10:31.088 ], 00:10:31.088 "driver_specific": {} 00:10:31.088 } 00:10:31.088 ] 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.088 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.348 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.348 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.348 "name": "Existed_Raid", 00:10:31.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.348 
"strip_size_kb": 0, 00:10:31.348 "state": "configuring", 00:10:31.348 "raid_level": "raid1", 00:10:31.348 "superblock": false, 00:10:31.348 "num_base_bdevs": 4, 00:10:31.348 "num_base_bdevs_discovered": 3, 00:10:31.348 "num_base_bdevs_operational": 4, 00:10:31.348 "base_bdevs_list": [ 00:10:31.348 { 00:10:31.348 "name": "BaseBdev1", 00:10:31.348 "uuid": "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0", 00:10:31.348 "is_configured": true, 00:10:31.348 "data_offset": 0, 00:10:31.348 "data_size": 65536 00:10:31.348 }, 00:10:31.348 { 00:10:31.348 "name": null, 00:10:31.348 "uuid": "685745f9-82ba-42f6-9d25-60d6046ebd49", 00:10:31.348 "is_configured": false, 00:10:31.348 "data_offset": 0, 00:10:31.348 "data_size": 65536 00:10:31.348 }, 00:10:31.348 { 00:10:31.348 "name": "BaseBdev3", 00:10:31.348 "uuid": "bd87598b-899d-4ea2-8dd4-b2e51ea602ba", 00:10:31.348 "is_configured": true, 00:10:31.348 "data_offset": 0, 00:10:31.348 "data_size": 65536 00:10:31.348 }, 00:10:31.348 { 00:10:31.348 "name": "BaseBdev4", 00:10:31.348 "uuid": "ba24dae0-fdc1-44d5-97bf-070b7ff120a6", 00:10:31.348 "is_configured": true, 00:10:31.348 "data_offset": 0, 00:10:31.348 "data_size": 65536 00:10:31.348 } 00:10:31.348 ] 00:10:31.348 }' 00:10:31.348 04:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.348 04:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.609 
04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.609 [2024-11-21 04:56:48.316127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.609 04:56:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.609 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.869 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.869 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.869 "name": "Existed_Raid", 00:10:31.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.869 "strip_size_kb": 0, 00:10:31.869 "state": "configuring", 00:10:31.869 "raid_level": "raid1", 00:10:31.869 "superblock": false, 00:10:31.869 "num_base_bdevs": 4, 00:10:31.869 "num_base_bdevs_discovered": 2, 00:10:31.869 "num_base_bdevs_operational": 4, 00:10:31.869 "base_bdevs_list": [ 00:10:31.869 { 00:10:31.869 "name": "BaseBdev1", 00:10:31.869 "uuid": "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0", 00:10:31.869 "is_configured": true, 00:10:31.869 "data_offset": 0, 00:10:31.869 "data_size": 65536 00:10:31.869 }, 00:10:31.869 { 00:10:31.869 "name": null, 00:10:31.869 "uuid": "685745f9-82ba-42f6-9d25-60d6046ebd49", 00:10:31.869 "is_configured": false, 00:10:31.869 "data_offset": 0, 00:10:31.869 "data_size": 65536 00:10:31.869 }, 00:10:31.869 { 00:10:31.869 "name": null, 00:10:31.869 "uuid": "bd87598b-899d-4ea2-8dd4-b2e51ea602ba", 00:10:31.869 "is_configured": false, 00:10:31.869 "data_offset": 0, 00:10:31.869 "data_size": 65536 00:10:31.869 }, 00:10:31.869 { 00:10:31.869 "name": "BaseBdev4", 00:10:31.869 "uuid": "ba24dae0-fdc1-44d5-97bf-070b7ff120a6", 00:10:31.869 "is_configured": true, 00:10:31.869 "data_offset": 0, 00:10:31.869 "data_size": 65536 00:10:31.869 } 00:10:31.869 ] 00:10:31.869 }' 00:10:31.869 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.869 04:56:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.129 [2024-11-21 04:56:48.807292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.129 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.389 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.389 "name": "Existed_Raid", 00:10:32.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.389 "strip_size_kb": 0, 00:10:32.389 "state": "configuring", 00:10:32.389 "raid_level": "raid1", 00:10:32.389 "superblock": false, 00:10:32.389 "num_base_bdevs": 4, 00:10:32.389 "num_base_bdevs_discovered": 3, 00:10:32.389 "num_base_bdevs_operational": 4, 00:10:32.389 "base_bdevs_list": [ 00:10:32.389 { 00:10:32.389 "name": "BaseBdev1", 00:10:32.389 "uuid": "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0", 00:10:32.389 "is_configured": true, 00:10:32.389 "data_offset": 0, 00:10:32.389 "data_size": 65536 00:10:32.389 }, 00:10:32.389 { 00:10:32.389 "name": null, 00:10:32.389 "uuid": "685745f9-82ba-42f6-9d25-60d6046ebd49", 00:10:32.389 "is_configured": false, 00:10:32.389 "data_offset": 0, 00:10:32.389 "data_size": 65536 00:10:32.389 }, 00:10:32.389 { 
00:10:32.389 "name": "BaseBdev3", 00:10:32.389 "uuid": "bd87598b-899d-4ea2-8dd4-b2e51ea602ba", 00:10:32.389 "is_configured": true, 00:10:32.389 "data_offset": 0, 00:10:32.389 "data_size": 65536 00:10:32.389 }, 00:10:32.389 { 00:10:32.389 "name": "BaseBdev4", 00:10:32.389 "uuid": "ba24dae0-fdc1-44d5-97bf-070b7ff120a6", 00:10:32.389 "is_configured": true, 00:10:32.389 "data_offset": 0, 00:10:32.389 "data_size": 65536 00:10:32.389 } 00:10:32.389 ] 00:10:32.389 }' 00:10:32.389 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.389 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.649 [2024-11-21 04:56:49.322418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.649 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.910 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.910 "name": "Existed_Raid", 00:10:32.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.910 "strip_size_kb": 0, 00:10:32.910 "state": "configuring", 00:10:32.910 "raid_level": "raid1", 00:10:32.910 "superblock": false, 00:10:32.910 
"num_base_bdevs": 4, 00:10:32.910 "num_base_bdevs_discovered": 2, 00:10:32.910 "num_base_bdevs_operational": 4, 00:10:32.910 "base_bdevs_list": [ 00:10:32.910 { 00:10:32.910 "name": null, 00:10:32.910 "uuid": "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0", 00:10:32.910 "is_configured": false, 00:10:32.910 "data_offset": 0, 00:10:32.910 "data_size": 65536 00:10:32.910 }, 00:10:32.910 { 00:10:32.910 "name": null, 00:10:32.910 "uuid": "685745f9-82ba-42f6-9d25-60d6046ebd49", 00:10:32.910 "is_configured": false, 00:10:32.910 "data_offset": 0, 00:10:32.910 "data_size": 65536 00:10:32.910 }, 00:10:32.910 { 00:10:32.910 "name": "BaseBdev3", 00:10:32.910 "uuid": "bd87598b-899d-4ea2-8dd4-b2e51ea602ba", 00:10:32.910 "is_configured": true, 00:10:32.910 "data_offset": 0, 00:10:32.910 "data_size": 65536 00:10:32.910 }, 00:10:32.910 { 00:10:32.910 "name": "BaseBdev4", 00:10:32.910 "uuid": "ba24dae0-fdc1-44d5-97bf-070b7ff120a6", 00:10:32.910 "is_configured": true, 00:10:32.910 "data_offset": 0, 00:10:32.910 "data_size": 65536 00:10:32.910 } 00:10:32.910 ] 00:10:32.910 }' 00:10:32.910 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.910 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:33.169 04:56:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.169 [2024-11-21 04:56:49.855947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.169 04:56:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.169 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.428 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.428 "name": "Existed_Raid", 00:10:33.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.428 "strip_size_kb": 0, 00:10:33.428 "state": "configuring", 00:10:33.428 "raid_level": "raid1", 00:10:33.428 "superblock": false, 00:10:33.428 "num_base_bdevs": 4, 00:10:33.428 "num_base_bdevs_discovered": 3, 00:10:33.428 "num_base_bdevs_operational": 4, 00:10:33.428 "base_bdevs_list": [ 00:10:33.428 { 00:10:33.428 "name": null, 00:10:33.428 "uuid": "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0", 00:10:33.428 "is_configured": false, 00:10:33.428 "data_offset": 0, 00:10:33.428 "data_size": 65536 00:10:33.428 }, 00:10:33.428 { 00:10:33.428 "name": "BaseBdev2", 00:10:33.428 "uuid": "685745f9-82ba-42f6-9d25-60d6046ebd49", 00:10:33.428 "is_configured": true, 00:10:33.428 "data_offset": 0, 00:10:33.428 "data_size": 65536 00:10:33.428 }, 00:10:33.428 { 00:10:33.428 "name": "BaseBdev3", 00:10:33.428 "uuid": "bd87598b-899d-4ea2-8dd4-b2e51ea602ba", 00:10:33.428 "is_configured": true, 00:10:33.428 "data_offset": 0, 00:10:33.428 "data_size": 65536 00:10:33.428 }, 00:10:33.428 { 00:10:33.428 "name": "BaseBdev4", 00:10:33.428 "uuid": "ba24dae0-fdc1-44d5-97bf-070b7ff120a6", 00:10:33.428 "is_configured": true, 00:10:33.428 "data_offset": 0, 00:10:33.428 "data_size": 65536 00:10:33.428 } 00:10:33.428 ] 00:10:33.428 }' 00:10:33.428 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.428 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.687 04:56:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c6a37394-6e3c-4c75-8142-23e5bc9cc7b0 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.687 [2024-11-21 04:56:50.409875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:33.687 [2024-11-21 04:56:50.409928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:33.687 [2024-11-21 04:56:50.409942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:33.687 
[2024-11-21 04:56:50.410208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:33.687 [2024-11-21 04:56:50.410331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:33.687 [2024-11-21 04:56:50.410341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:33.687 [2024-11-21 04:56:50.410523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.687 NewBaseBdev 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.687 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.688 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.688 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.688 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.688 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.947 [ 00:10:33.947 { 00:10:33.947 "name": "NewBaseBdev", 00:10:33.947 "aliases": [ 00:10:33.947 "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0" 00:10:33.947 ], 00:10:33.947 "product_name": "Malloc disk", 00:10:33.947 "block_size": 512, 00:10:33.947 "num_blocks": 65536, 00:10:33.947 "uuid": "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0", 00:10:33.947 "assigned_rate_limits": { 00:10:33.947 "rw_ios_per_sec": 0, 00:10:33.947 "rw_mbytes_per_sec": 0, 00:10:33.947 "r_mbytes_per_sec": 0, 00:10:33.947 "w_mbytes_per_sec": 0 00:10:33.947 }, 00:10:33.947 "claimed": true, 00:10:33.947 "claim_type": "exclusive_write", 00:10:33.947 "zoned": false, 00:10:33.947 "supported_io_types": { 00:10:33.947 "read": true, 00:10:33.947 "write": true, 00:10:33.947 "unmap": true, 00:10:33.947 "flush": true, 00:10:33.947 "reset": true, 00:10:33.947 "nvme_admin": false, 00:10:33.947 "nvme_io": false, 00:10:33.947 "nvme_io_md": false, 00:10:33.947 "write_zeroes": true, 00:10:33.947 "zcopy": true, 00:10:33.947 "get_zone_info": false, 00:10:33.947 "zone_management": false, 00:10:33.947 "zone_append": false, 00:10:33.947 "compare": false, 00:10:33.947 "compare_and_write": false, 00:10:33.947 "abort": true, 00:10:33.947 "seek_hole": false, 00:10:33.947 "seek_data": false, 00:10:33.947 "copy": true, 00:10:33.947 "nvme_iov_md": false 00:10:33.947 }, 00:10:33.947 "memory_domains": [ 00:10:33.947 { 00:10:33.947 "dma_device_id": "system", 00:10:33.947 "dma_device_type": 1 00:10:33.947 }, 00:10:33.947 { 00:10:33.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.947 "dma_device_type": 2 00:10:33.947 } 00:10:33.947 ], 00:10:33.947 "driver_specific": {} 00:10:33.947 } 00:10:33.947 ] 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.947 "name": "Existed_Raid", 00:10:33.947 "uuid": "baedcc5a-f87d-49a5-8808-2a3885a9017f", 00:10:33.947 "strip_size_kb": 0, 00:10:33.947 "state": "online", 00:10:33.947 
"raid_level": "raid1", 00:10:33.947 "superblock": false, 00:10:33.947 "num_base_bdevs": 4, 00:10:33.947 "num_base_bdevs_discovered": 4, 00:10:33.947 "num_base_bdevs_operational": 4, 00:10:33.947 "base_bdevs_list": [ 00:10:33.947 { 00:10:33.947 "name": "NewBaseBdev", 00:10:33.947 "uuid": "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0", 00:10:33.947 "is_configured": true, 00:10:33.947 "data_offset": 0, 00:10:33.947 "data_size": 65536 00:10:33.947 }, 00:10:33.947 { 00:10:33.947 "name": "BaseBdev2", 00:10:33.947 "uuid": "685745f9-82ba-42f6-9d25-60d6046ebd49", 00:10:33.947 "is_configured": true, 00:10:33.947 "data_offset": 0, 00:10:33.947 "data_size": 65536 00:10:33.947 }, 00:10:33.947 { 00:10:33.947 "name": "BaseBdev3", 00:10:33.947 "uuid": "bd87598b-899d-4ea2-8dd4-b2e51ea602ba", 00:10:33.947 "is_configured": true, 00:10:33.947 "data_offset": 0, 00:10:33.947 "data_size": 65536 00:10:33.947 }, 00:10:33.947 { 00:10:33.947 "name": "BaseBdev4", 00:10:33.947 "uuid": "ba24dae0-fdc1-44d5-97bf-070b7ff120a6", 00:10:33.947 "is_configured": true, 00:10:33.947 "data_offset": 0, 00:10:33.947 "data_size": 65536 00:10:33.947 } 00:10:33.947 ] 00:10:33.947 }' 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.947 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.208 [2024-11-21 04:56:50.877448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.208 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.208 "name": "Existed_Raid", 00:10:34.208 "aliases": [ 00:10:34.208 "baedcc5a-f87d-49a5-8808-2a3885a9017f" 00:10:34.208 ], 00:10:34.208 "product_name": "Raid Volume", 00:10:34.208 "block_size": 512, 00:10:34.208 "num_blocks": 65536, 00:10:34.208 "uuid": "baedcc5a-f87d-49a5-8808-2a3885a9017f", 00:10:34.208 "assigned_rate_limits": { 00:10:34.208 "rw_ios_per_sec": 0, 00:10:34.208 "rw_mbytes_per_sec": 0, 00:10:34.208 "r_mbytes_per_sec": 0, 00:10:34.208 "w_mbytes_per_sec": 0 00:10:34.208 }, 00:10:34.208 "claimed": false, 00:10:34.208 "zoned": false, 00:10:34.208 "supported_io_types": { 00:10:34.208 "read": true, 00:10:34.208 "write": true, 00:10:34.208 "unmap": false, 00:10:34.208 "flush": false, 00:10:34.208 "reset": true, 00:10:34.208 "nvme_admin": false, 00:10:34.208 "nvme_io": false, 00:10:34.208 "nvme_io_md": false, 00:10:34.208 "write_zeroes": true, 00:10:34.208 "zcopy": false, 00:10:34.208 "get_zone_info": false, 00:10:34.208 "zone_management": false, 00:10:34.208 "zone_append": false, 00:10:34.208 "compare": false, 00:10:34.208 "compare_and_write": false, 00:10:34.208 "abort": false, 00:10:34.208 "seek_hole": false, 00:10:34.208 "seek_data": false, 00:10:34.208 
"copy": false, 00:10:34.208 "nvme_iov_md": false 00:10:34.208 }, 00:10:34.208 "memory_domains": [ 00:10:34.208 { 00:10:34.208 "dma_device_id": "system", 00:10:34.208 "dma_device_type": 1 00:10:34.208 }, 00:10:34.208 { 00:10:34.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.208 "dma_device_type": 2 00:10:34.208 }, 00:10:34.208 { 00:10:34.208 "dma_device_id": "system", 00:10:34.208 "dma_device_type": 1 00:10:34.208 }, 00:10:34.208 { 00:10:34.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.208 "dma_device_type": 2 00:10:34.208 }, 00:10:34.208 { 00:10:34.208 "dma_device_id": "system", 00:10:34.208 "dma_device_type": 1 00:10:34.208 }, 00:10:34.208 { 00:10:34.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.208 "dma_device_type": 2 00:10:34.208 }, 00:10:34.208 { 00:10:34.208 "dma_device_id": "system", 00:10:34.208 "dma_device_type": 1 00:10:34.208 }, 00:10:34.208 { 00:10:34.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.208 "dma_device_type": 2 00:10:34.208 } 00:10:34.208 ], 00:10:34.208 "driver_specific": { 00:10:34.208 "raid": { 00:10:34.208 "uuid": "baedcc5a-f87d-49a5-8808-2a3885a9017f", 00:10:34.208 "strip_size_kb": 0, 00:10:34.208 "state": "online", 00:10:34.208 "raid_level": "raid1", 00:10:34.208 "superblock": false, 00:10:34.208 "num_base_bdevs": 4, 00:10:34.208 "num_base_bdevs_discovered": 4, 00:10:34.208 "num_base_bdevs_operational": 4, 00:10:34.208 "base_bdevs_list": [ 00:10:34.208 { 00:10:34.208 "name": "NewBaseBdev", 00:10:34.208 "uuid": "c6a37394-6e3c-4c75-8142-23e5bc9cc7b0", 00:10:34.208 "is_configured": true, 00:10:34.208 "data_offset": 0, 00:10:34.208 "data_size": 65536 00:10:34.208 }, 00:10:34.208 { 00:10:34.209 "name": "BaseBdev2", 00:10:34.209 "uuid": "685745f9-82ba-42f6-9d25-60d6046ebd49", 00:10:34.209 "is_configured": true, 00:10:34.209 "data_offset": 0, 00:10:34.209 "data_size": 65536 00:10:34.209 }, 00:10:34.209 { 00:10:34.209 "name": "BaseBdev3", 00:10:34.209 "uuid": "bd87598b-899d-4ea2-8dd4-b2e51ea602ba", 00:10:34.209 
"is_configured": true, 00:10:34.209 "data_offset": 0, 00:10:34.209 "data_size": 65536 00:10:34.209 }, 00:10:34.209 { 00:10:34.209 "name": "BaseBdev4", 00:10:34.209 "uuid": "ba24dae0-fdc1-44d5-97bf-070b7ff120a6", 00:10:34.209 "is_configured": true, 00:10:34.209 "data_offset": 0, 00:10:34.209 "data_size": 65536 00:10:34.209 } 00:10:34.209 ] 00:10:34.209 } 00:10:34.209 } 00:10:34.209 }' 00:10:34.209 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.468 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:34.468 BaseBdev2 00:10:34.468 BaseBdev3 00:10:34.468 BaseBdev4' 00:10:34.468 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.468 04:56:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.468 04:56:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.468 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.728 [2024-11-21 04:56:51.220511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.728 [2024-11-21 04:56:51.220586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.728 [2024-11-21 04:56:51.220678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.728 [2024-11-21 04:56:51.220942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.728 [2024-11-21 04:56:51.220958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 84133 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 84133 ']' 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 84133 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84133 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.728 killing process with pid 84133 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84133' 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 84133 00:10:34.728 [2024-11-21 04:56:51.270999] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.728 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 84133 00:10:34.728 [2024-11-21 04:56:51.310874] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.987 ************************************ 00:10:34.987 END TEST raid_state_function_test 00:10:34.987 ************************************ 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:34.988 00:10:34.988 real 0m9.836s 00:10:34.988 user 0m16.840s 00:10:34.988 sys 0m2.090s 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:34.988 04:56:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:34.988 04:56:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.988 04:56:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.988 04:56:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.988 ************************************ 00:10:34.988 START TEST raid_state_function_test_sb 00:10:34.988 ************************************ 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.988 
04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84786 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84786' 00:10:34.988 Process raid pid: 84786 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84786 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84786 ']' 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.988 04:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.988 [2024-11-21 04:56:51.685786] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:10:34.988 [2024-11-21 04:56:51.685900] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.248 [2024-11-21 04:56:51.859333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.248 [2024-11-21 04:56:51.884105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.248 [2024-11-21 04:56:51.925287] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.248 [2024-11-21 04:56:51.925330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.817 [2024-11-21 04:56:52.513840] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.817 [2024-11-21 04:56:52.513948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.817 [2024-11-21 04:56:52.513962] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.817 [2024-11-21 04:56:52.513971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.817 [2024-11-21 04:56:52.513977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:35.817 [2024-11-21 04:56:52.513988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.817 [2024-11-21 04:56:52.513996] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:35.817 [2024-11-21 04:56:52.514005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.817 04:56:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.817 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.077 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.077 "name": "Existed_Raid", 00:10:36.077 "uuid": "3b3a9c32-f111-453c-9a85-c29705074d9c", 00:10:36.077 "strip_size_kb": 0, 00:10:36.077 "state": "configuring", 00:10:36.077 "raid_level": "raid1", 00:10:36.077 "superblock": true, 00:10:36.077 "num_base_bdevs": 4, 00:10:36.077 "num_base_bdevs_discovered": 0, 00:10:36.077 "num_base_bdevs_operational": 4, 00:10:36.077 "base_bdevs_list": [ 00:10:36.077 { 00:10:36.077 "name": "BaseBdev1", 00:10:36.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.077 "is_configured": false, 00:10:36.077 "data_offset": 0, 00:10:36.077 "data_size": 0 00:10:36.077 }, 00:10:36.077 { 00:10:36.077 "name": "BaseBdev2", 00:10:36.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.077 "is_configured": false, 00:10:36.077 "data_offset": 0, 00:10:36.077 "data_size": 0 00:10:36.077 }, 00:10:36.077 { 00:10:36.077 "name": "BaseBdev3", 00:10:36.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.077 "is_configured": false, 00:10:36.077 "data_offset": 0, 00:10:36.077 "data_size": 0 00:10:36.077 }, 00:10:36.077 { 00:10:36.077 "name": "BaseBdev4", 00:10:36.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.077 "is_configured": false, 00:10:36.077 "data_offset": 0, 00:10:36.077 "data_size": 0 00:10:36.077 } 00:10:36.077 ] 00:10:36.077 }' 00:10:36.077 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.077 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.337 [2024-11-21 04:56:52.941031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.337 [2024-11-21 04:56:52.941167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.337 [2024-11-21 04:56:52.953019] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.337 [2024-11-21 04:56:52.953114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.337 [2024-11-21 04:56:52.953168] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.337 [2024-11-21 04:56:52.953224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.337 [2024-11-21 04:56:52.953268] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.337 [2024-11-21 04:56:52.953311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.337 [2024-11-21 04:56:52.953350] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:36.337 [2024-11-21 04:56:52.953392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.337 [2024-11-21 04:56:52.974593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.337 BaseBdev1 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.337 04:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.337 [ 00:10:36.337 { 00:10:36.337 "name": "BaseBdev1", 00:10:36.337 "aliases": [ 00:10:36.337 "5e070b60-a519-4d88-8f31-f30c275ba79d" 00:10:36.337 ], 00:10:36.337 "product_name": "Malloc disk", 00:10:36.337 "block_size": 512, 00:10:36.337 "num_blocks": 65536, 00:10:36.337 "uuid": "5e070b60-a519-4d88-8f31-f30c275ba79d", 00:10:36.338 "assigned_rate_limits": { 00:10:36.338 "rw_ios_per_sec": 0, 00:10:36.338 "rw_mbytes_per_sec": 0, 00:10:36.338 "r_mbytes_per_sec": 0, 00:10:36.338 "w_mbytes_per_sec": 0 00:10:36.338 }, 00:10:36.338 "claimed": true, 00:10:36.338 "claim_type": "exclusive_write", 00:10:36.338 "zoned": false, 00:10:36.338 "supported_io_types": { 00:10:36.338 "read": true, 00:10:36.338 "write": true, 00:10:36.338 "unmap": true, 00:10:36.338 "flush": true, 00:10:36.338 "reset": true, 00:10:36.338 "nvme_admin": false, 00:10:36.338 "nvme_io": false, 00:10:36.338 "nvme_io_md": false, 00:10:36.338 "write_zeroes": true, 00:10:36.338 "zcopy": true, 00:10:36.338 "get_zone_info": false, 00:10:36.338 "zone_management": false, 00:10:36.338 "zone_append": false, 00:10:36.338 "compare": false, 00:10:36.338 "compare_and_write": false, 00:10:36.338 "abort": true, 00:10:36.338 "seek_hole": false, 00:10:36.338 "seek_data": false, 00:10:36.338 "copy": true, 00:10:36.338 "nvme_iov_md": false 00:10:36.338 }, 00:10:36.338 "memory_domains": [ 00:10:36.338 { 00:10:36.338 "dma_device_id": "system", 00:10:36.338 "dma_device_type": 1 00:10:36.338 }, 00:10:36.338 { 00:10:36.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.338 "dma_device_type": 2 00:10:36.338 } 00:10:36.338 ], 00:10:36.338 "driver_specific": {} 
00:10:36.338 } 00:10:36.338 ] 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.338 "name": "Existed_Raid", 00:10:36.338 "uuid": "1c61efcc-8b86-4daa-be5a-dbdd85a5a197", 00:10:36.338 "strip_size_kb": 0, 00:10:36.338 "state": "configuring", 00:10:36.338 "raid_level": "raid1", 00:10:36.338 "superblock": true, 00:10:36.338 "num_base_bdevs": 4, 00:10:36.338 "num_base_bdevs_discovered": 1, 00:10:36.338 "num_base_bdevs_operational": 4, 00:10:36.338 "base_bdevs_list": [ 00:10:36.338 { 00:10:36.338 "name": "BaseBdev1", 00:10:36.338 "uuid": "5e070b60-a519-4d88-8f31-f30c275ba79d", 00:10:36.338 "is_configured": true, 00:10:36.338 "data_offset": 2048, 00:10:36.338 "data_size": 63488 00:10:36.338 }, 00:10:36.338 { 00:10:36.338 "name": "BaseBdev2", 00:10:36.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.338 "is_configured": false, 00:10:36.338 "data_offset": 0, 00:10:36.338 "data_size": 0 00:10:36.338 }, 00:10:36.338 { 00:10:36.338 "name": "BaseBdev3", 00:10:36.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.338 "is_configured": false, 00:10:36.338 "data_offset": 0, 00:10:36.338 "data_size": 0 00:10:36.338 }, 00:10:36.338 { 00:10:36.338 "name": "BaseBdev4", 00:10:36.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.338 "is_configured": false, 00:10:36.338 "data_offset": 0, 00:10:36.338 "data_size": 0 00:10:36.338 } 00:10:36.338 ] 00:10:36.338 }' 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.338 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.907 [2024-11-21 04:56:53.429879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.907 [2024-11-21 04:56:53.430016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.907 [2024-11-21 04:56:53.441867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.907 [2024-11-21 04:56:53.443868] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.907 [2024-11-21 04:56:53.443958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.907 [2024-11-21 04:56:53.443985] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.907 [2024-11-21 04:56:53.443997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.907 [2024-11-21 04:56:53.444005] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:36.907 [2024-11-21 04:56:53.444014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:36.907 04:56:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.907 "name": 
"Existed_Raid", 00:10:36.907 "uuid": "70d3751d-8a48-4eef-86df-045d7adf718f", 00:10:36.907 "strip_size_kb": 0, 00:10:36.907 "state": "configuring", 00:10:36.907 "raid_level": "raid1", 00:10:36.907 "superblock": true, 00:10:36.907 "num_base_bdevs": 4, 00:10:36.907 "num_base_bdevs_discovered": 1, 00:10:36.907 "num_base_bdevs_operational": 4, 00:10:36.907 "base_bdevs_list": [ 00:10:36.907 { 00:10:36.907 "name": "BaseBdev1", 00:10:36.907 "uuid": "5e070b60-a519-4d88-8f31-f30c275ba79d", 00:10:36.907 "is_configured": true, 00:10:36.907 "data_offset": 2048, 00:10:36.907 "data_size": 63488 00:10:36.907 }, 00:10:36.907 { 00:10:36.907 "name": "BaseBdev2", 00:10:36.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.907 "is_configured": false, 00:10:36.907 "data_offset": 0, 00:10:36.907 "data_size": 0 00:10:36.907 }, 00:10:36.907 { 00:10:36.907 "name": "BaseBdev3", 00:10:36.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.907 "is_configured": false, 00:10:36.907 "data_offset": 0, 00:10:36.907 "data_size": 0 00:10:36.907 }, 00:10:36.907 { 00:10:36.907 "name": "BaseBdev4", 00:10:36.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.907 "is_configured": false, 00:10:36.907 "data_offset": 0, 00:10:36.907 "data_size": 0 00:10:36.907 } 00:10:36.907 ] 00:10:36.907 }' 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.907 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.476 [2024-11-21 04:56:53.960220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.476 
BaseBdev2 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.476 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.477 [ 00:10:37.477 { 00:10:37.477 "name": "BaseBdev2", 00:10:37.477 "aliases": [ 00:10:37.477 "c3b5da08-bc96-47a1-8167-31618b5319e3" 00:10:37.477 ], 00:10:37.477 "product_name": "Malloc disk", 00:10:37.477 "block_size": 512, 00:10:37.477 "num_blocks": 65536, 00:10:37.477 "uuid": "c3b5da08-bc96-47a1-8167-31618b5319e3", 00:10:37.477 "assigned_rate_limits": { 
00:10:37.477 "rw_ios_per_sec": 0, 00:10:37.477 "rw_mbytes_per_sec": 0, 00:10:37.477 "r_mbytes_per_sec": 0, 00:10:37.477 "w_mbytes_per_sec": 0 00:10:37.477 }, 00:10:37.477 "claimed": true, 00:10:37.477 "claim_type": "exclusive_write", 00:10:37.477 "zoned": false, 00:10:37.477 "supported_io_types": { 00:10:37.477 "read": true, 00:10:37.477 "write": true, 00:10:37.477 "unmap": true, 00:10:37.477 "flush": true, 00:10:37.477 "reset": true, 00:10:37.477 "nvme_admin": false, 00:10:37.477 "nvme_io": false, 00:10:37.477 "nvme_io_md": false, 00:10:37.477 "write_zeroes": true, 00:10:37.477 "zcopy": true, 00:10:37.477 "get_zone_info": false, 00:10:37.477 "zone_management": false, 00:10:37.477 "zone_append": false, 00:10:37.477 "compare": false, 00:10:37.477 "compare_and_write": false, 00:10:37.477 "abort": true, 00:10:37.477 "seek_hole": false, 00:10:37.477 "seek_data": false, 00:10:37.477 "copy": true, 00:10:37.477 "nvme_iov_md": false 00:10:37.477 }, 00:10:37.477 "memory_domains": [ 00:10:37.477 { 00:10:37.477 "dma_device_id": "system", 00:10:37.477 "dma_device_type": 1 00:10:37.477 }, 00:10:37.477 { 00:10:37.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.477 "dma_device_type": 2 00:10:37.477 } 00:10:37.477 ], 00:10:37.477 "driver_specific": {} 00:10:37.477 } 00:10:37.477 ] 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.477 04:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.477 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.477 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.477 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.477 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.477 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.477 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.477 "name": "Existed_Raid", 00:10:37.477 "uuid": "70d3751d-8a48-4eef-86df-045d7adf718f", 00:10:37.477 "strip_size_kb": 0, 00:10:37.477 "state": "configuring", 00:10:37.477 "raid_level": "raid1", 00:10:37.477 "superblock": true, 00:10:37.477 "num_base_bdevs": 4, 00:10:37.477 "num_base_bdevs_discovered": 2, 00:10:37.477 "num_base_bdevs_operational": 4, 00:10:37.477 
"base_bdevs_list": [ 00:10:37.477 { 00:10:37.477 "name": "BaseBdev1", 00:10:37.477 "uuid": "5e070b60-a519-4d88-8f31-f30c275ba79d", 00:10:37.477 "is_configured": true, 00:10:37.477 "data_offset": 2048, 00:10:37.477 "data_size": 63488 00:10:37.477 }, 00:10:37.477 { 00:10:37.477 "name": "BaseBdev2", 00:10:37.477 "uuid": "c3b5da08-bc96-47a1-8167-31618b5319e3", 00:10:37.477 "is_configured": true, 00:10:37.477 "data_offset": 2048, 00:10:37.477 "data_size": 63488 00:10:37.477 }, 00:10:37.477 { 00:10:37.477 "name": "BaseBdev3", 00:10:37.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.477 "is_configured": false, 00:10:37.477 "data_offset": 0, 00:10:37.477 "data_size": 0 00:10:37.477 }, 00:10:37.477 { 00:10:37.477 "name": "BaseBdev4", 00:10:37.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.477 "is_configured": false, 00:10:37.477 "data_offset": 0, 00:10:37.477 "data_size": 0 00:10:37.477 } 00:10:37.477 ] 00:10:37.477 }' 00:10:37.477 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.477 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.046 [2024-11-21 04:56:54.500611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.046 BaseBdev3 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.046 [ 00:10:38.046 { 00:10:38.046 "name": "BaseBdev3", 00:10:38.046 "aliases": [ 00:10:38.046 "2027a6ef-2e43-4dcd-b711-d0c478b46efd" 00:10:38.046 ], 00:10:38.046 "product_name": "Malloc disk", 00:10:38.046 "block_size": 512, 00:10:38.046 "num_blocks": 65536, 00:10:38.046 "uuid": "2027a6ef-2e43-4dcd-b711-d0c478b46efd", 00:10:38.046 "assigned_rate_limits": { 00:10:38.046 "rw_ios_per_sec": 0, 00:10:38.046 "rw_mbytes_per_sec": 0, 00:10:38.046 "r_mbytes_per_sec": 0, 00:10:38.046 "w_mbytes_per_sec": 0 00:10:38.046 }, 00:10:38.046 "claimed": true, 00:10:38.046 "claim_type": "exclusive_write", 00:10:38.046 "zoned": false, 00:10:38.046 "supported_io_types": { 00:10:38.046 "read": true, 00:10:38.046 
"write": true, 00:10:38.046 "unmap": true, 00:10:38.046 "flush": true, 00:10:38.046 "reset": true, 00:10:38.046 "nvme_admin": false, 00:10:38.046 "nvme_io": false, 00:10:38.046 "nvme_io_md": false, 00:10:38.046 "write_zeroes": true, 00:10:38.046 "zcopy": true, 00:10:38.046 "get_zone_info": false, 00:10:38.046 "zone_management": false, 00:10:38.046 "zone_append": false, 00:10:38.046 "compare": false, 00:10:38.046 "compare_and_write": false, 00:10:38.046 "abort": true, 00:10:38.046 "seek_hole": false, 00:10:38.046 "seek_data": false, 00:10:38.046 "copy": true, 00:10:38.046 "nvme_iov_md": false 00:10:38.046 }, 00:10:38.046 "memory_domains": [ 00:10:38.046 { 00:10:38.046 "dma_device_id": "system", 00:10:38.046 "dma_device_type": 1 00:10:38.046 }, 00:10:38.046 { 00:10:38.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.046 "dma_device_type": 2 00:10:38.046 } 00:10:38.046 ], 00:10:38.046 "driver_specific": {} 00:10:38.046 } 00:10:38.046 ] 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.046 "name": "Existed_Raid", 00:10:38.046 "uuid": "70d3751d-8a48-4eef-86df-045d7adf718f", 00:10:38.046 "strip_size_kb": 0, 00:10:38.046 "state": "configuring", 00:10:38.046 "raid_level": "raid1", 00:10:38.046 "superblock": true, 00:10:38.046 "num_base_bdevs": 4, 00:10:38.046 "num_base_bdevs_discovered": 3, 00:10:38.046 "num_base_bdevs_operational": 4, 00:10:38.046 "base_bdevs_list": [ 00:10:38.046 { 00:10:38.046 "name": "BaseBdev1", 00:10:38.046 "uuid": "5e070b60-a519-4d88-8f31-f30c275ba79d", 00:10:38.046 "is_configured": true, 00:10:38.046 "data_offset": 2048, 00:10:38.046 "data_size": 63488 00:10:38.046 }, 00:10:38.046 { 00:10:38.046 "name": "BaseBdev2", 00:10:38.046 "uuid": 
"c3b5da08-bc96-47a1-8167-31618b5319e3", 00:10:38.046 "is_configured": true, 00:10:38.046 "data_offset": 2048, 00:10:38.046 "data_size": 63488 00:10:38.046 }, 00:10:38.046 { 00:10:38.046 "name": "BaseBdev3", 00:10:38.046 "uuid": "2027a6ef-2e43-4dcd-b711-d0c478b46efd", 00:10:38.046 "is_configured": true, 00:10:38.046 "data_offset": 2048, 00:10:38.046 "data_size": 63488 00:10:38.046 }, 00:10:38.046 { 00:10:38.046 "name": "BaseBdev4", 00:10:38.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.046 "is_configured": false, 00:10:38.046 "data_offset": 0, 00:10:38.046 "data_size": 0 00:10:38.046 } 00:10:38.046 ] 00:10:38.046 }' 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.046 04:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.305 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:38.305 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.305 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.305 [2024-11-21 04:56:55.035168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:38.305 [2024-11-21 04:56:55.035413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:38.305 [2024-11-21 04:56:55.035431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:38.563 BaseBdev4 00:10:38.563 [2024-11-21 04:56:55.035786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:38.563 [2024-11-21 04:56:55.035960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:38.563 [2024-11-21 04:56:55.035977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:38.563 [2024-11-21 04:56:55.036129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.563 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.563 [ 00:10:38.563 { 00:10:38.563 "name": "BaseBdev4", 00:10:38.563 "aliases": [ 00:10:38.563 "cd0377b6-eedd-4370-a17f-69c71087fd4d" 00:10:38.563 ], 00:10:38.563 "product_name": "Malloc disk", 00:10:38.563 "block_size": 512, 00:10:38.563 
"num_blocks": 65536, 00:10:38.563 "uuid": "cd0377b6-eedd-4370-a17f-69c71087fd4d", 00:10:38.563 "assigned_rate_limits": { 00:10:38.563 "rw_ios_per_sec": 0, 00:10:38.563 "rw_mbytes_per_sec": 0, 00:10:38.563 "r_mbytes_per_sec": 0, 00:10:38.563 "w_mbytes_per_sec": 0 00:10:38.563 }, 00:10:38.563 "claimed": true, 00:10:38.563 "claim_type": "exclusive_write", 00:10:38.563 "zoned": false, 00:10:38.563 "supported_io_types": { 00:10:38.563 "read": true, 00:10:38.563 "write": true, 00:10:38.563 "unmap": true, 00:10:38.563 "flush": true, 00:10:38.563 "reset": true, 00:10:38.563 "nvme_admin": false, 00:10:38.563 "nvme_io": false, 00:10:38.563 "nvme_io_md": false, 00:10:38.563 "write_zeroes": true, 00:10:38.563 "zcopy": true, 00:10:38.563 "get_zone_info": false, 00:10:38.563 "zone_management": false, 00:10:38.563 "zone_append": false, 00:10:38.563 "compare": false, 00:10:38.563 "compare_and_write": false, 00:10:38.563 "abort": true, 00:10:38.563 "seek_hole": false, 00:10:38.563 "seek_data": false, 00:10:38.563 "copy": true, 00:10:38.563 "nvme_iov_md": false 00:10:38.563 }, 00:10:38.563 "memory_domains": [ 00:10:38.563 { 00:10:38.563 "dma_device_id": "system", 00:10:38.564 "dma_device_type": 1 00:10:38.564 }, 00:10:38.564 { 00:10:38.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.564 "dma_device_type": 2 00:10:38.564 } 00:10:38.564 ], 00:10:38.564 "driver_specific": {} 00:10:38.564 } 00:10:38.564 ] 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.564 "name": "Existed_Raid", 00:10:38.564 "uuid": "70d3751d-8a48-4eef-86df-045d7adf718f", 00:10:38.564 "strip_size_kb": 0, 00:10:38.564 "state": "online", 00:10:38.564 "raid_level": "raid1", 00:10:38.564 "superblock": true, 00:10:38.564 "num_base_bdevs": 4, 
00:10:38.564 "num_base_bdevs_discovered": 4, 00:10:38.564 "num_base_bdevs_operational": 4, 00:10:38.564 "base_bdevs_list": [ 00:10:38.564 { 00:10:38.564 "name": "BaseBdev1", 00:10:38.564 "uuid": "5e070b60-a519-4d88-8f31-f30c275ba79d", 00:10:38.564 "is_configured": true, 00:10:38.564 "data_offset": 2048, 00:10:38.564 "data_size": 63488 00:10:38.564 }, 00:10:38.564 { 00:10:38.564 "name": "BaseBdev2", 00:10:38.564 "uuid": "c3b5da08-bc96-47a1-8167-31618b5319e3", 00:10:38.564 "is_configured": true, 00:10:38.564 "data_offset": 2048, 00:10:38.564 "data_size": 63488 00:10:38.564 }, 00:10:38.564 { 00:10:38.564 "name": "BaseBdev3", 00:10:38.564 "uuid": "2027a6ef-2e43-4dcd-b711-d0c478b46efd", 00:10:38.564 "is_configured": true, 00:10:38.564 "data_offset": 2048, 00:10:38.564 "data_size": 63488 00:10:38.564 }, 00:10:38.564 { 00:10:38.564 "name": "BaseBdev4", 00:10:38.564 "uuid": "cd0377b6-eedd-4370-a17f-69c71087fd4d", 00:10:38.564 "is_configured": true, 00:10:38.564 "data_offset": 2048, 00:10:38.564 "data_size": 63488 00:10:38.564 } 00:10:38.564 ] 00:10:38.564 }' 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.564 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.823 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:38.823 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:38.823 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.823 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.823 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.823 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.823 
04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:38.823 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.823 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.823 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.083 [2024-11-21 04:56:55.554831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.083 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.083 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.083 "name": "Existed_Raid", 00:10:39.083 "aliases": [ 00:10:39.083 "70d3751d-8a48-4eef-86df-045d7adf718f" 00:10:39.083 ], 00:10:39.083 "product_name": "Raid Volume", 00:10:39.083 "block_size": 512, 00:10:39.083 "num_blocks": 63488, 00:10:39.083 "uuid": "70d3751d-8a48-4eef-86df-045d7adf718f", 00:10:39.083 "assigned_rate_limits": { 00:10:39.083 "rw_ios_per_sec": 0, 00:10:39.083 "rw_mbytes_per_sec": 0, 00:10:39.083 "r_mbytes_per_sec": 0, 00:10:39.083 "w_mbytes_per_sec": 0 00:10:39.083 }, 00:10:39.083 "claimed": false, 00:10:39.083 "zoned": false, 00:10:39.083 "supported_io_types": { 00:10:39.083 "read": true, 00:10:39.083 "write": true, 00:10:39.083 "unmap": false, 00:10:39.083 "flush": false, 00:10:39.083 "reset": true, 00:10:39.083 "nvme_admin": false, 00:10:39.083 "nvme_io": false, 00:10:39.083 "nvme_io_md": false, 00:10:39.083 "write_zeroes": true, 00:10:39.083 "zcopy": false, 00:10:39.083 "get_zone_info": false, 00:10:39.083 "zone_management": false, 00:10:39.083 "zone_append": false, 00:10:39.083 "compare": false, 00:10:39.083 "compare_and_write": false, 00:10:39.083 "abort": false, 00:10:39.083 "seek_hole": false, 00:10:39.083 "seek_data": false, 00:10:39.083 "copy": false, 00:10:39.083 
"nvme_iov_md": false 00:10:39.083 }, 00:10:39.083 "memory_domains": [ 00:10:39.083 { 00:10:39.083 "dma_device_id": "system", 00:10:39.083 "dma_device_type": 1 00:10:39.083 }, 00:10:39.083 { 00:10:39.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.083 "dma_device_type": 2 00:10:39.083 }, 00:10:39.083 { 00:10:39.083 "dma_device_id": "system", 00:10:39.083 "dma_device_type": 1 00:10:39.083 }, 00:10:39.083 { 00:10:39.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.083 "dma_device_type": 2 00:10:39.083 }, 00:10:39.083 { 00:10:39.083 "dma_device_id": "system", 00:10:39.083 "dma_device_type": 1 00:10:39.084 }, 00:10:39.084 { 00:10:39.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.084 "dma_device_type": 2 00:10:39.084 }, 00:10:39.084 { 00:10:39.084 "dma_device_id": "system", 00:10:39.084 "dma_device_type": 1 00:10:39.084 }, 00:10:39.084 { 00:10:39.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.084 "dma_device_type": 2 00:10:39.084 } 00:10:39.084 ], 00:10:39.084 "driver_specific": { 00:10:39.084 "raid": { 00:10:39.084 "uuid": "70d3751d-8a48-4eef-86df-045d7adf718f", 00:10:39.084 "strip_size_kb": 0, 00:10:39.084 "state": "online", 00:10:39.084 "raid_level": "raid1", 00:10:39.084 "superblock": true, 00:10:39.084 "num_base_bdevs": 4, 00:10:39.084 "num_base_bdevs_discovered": 4, 00:10:39.084 "num_base_bdevs_operational": 4, 00:10:39.084 "base_bdevs_list": [ 00:10:39.084 { 00:10:39.084 "name": "BaseBdev1", 00:10:39.084 "uuid": "5e070b60-a519-4d88-8f31-f30c275ba79d", 00:10:39.084 "is_configured": true, 00:10:39.084 "data_offset": 2048, 00:10:39.084 "data_size": 63488 00:10:39.084 }, 00:10:39.084 { 00:10:39.084 "name": "BaseBdev2", 00:10:39.084 "uuid": "c3b5da08-bc96-47a1-8167-31618b5319e3", 00:10:39.084 "is_configured": true, 00:10:39.084 "data_offset": 2048, 00:10:39.084 "data_size": 63488 00:10:39.084 }, 00:10:39.084 { 00:10:39.084 "name": "BaseBdev3", 00:10:39.084 "uuid": "2027a6ef-2e43-4dcd-b711-d0c478b46efd", 00:10:39.084 "is_configured": true, 
00:10:39.084 "data_offset": 2048, 00:10:39.084 "data_size": 63488 00:10:39.084 }, 00:10:39.084 { 00:10:39.084 "name": "BaseBdev4", 00:10:39.084 "uuid": "cd0377b6-eedd-4370-a17f-69c71087fd4d", 00:10:39.084 "is_configured": true, 00:10:39.084 "data_offset": 2048, 00:10:39.084 "data_size": 63488 00:10:39.084 } 00:10:39.084 ] 00:10:39.084 } 00:10:39.084 } 00:10:39.084 }' 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:39.084 BaseBdev2 00:10:39.084 BaseBdev3 00:10:39.084 BaseBdev4' 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.084 04:56:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.084 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.343 [2024-11-21 04:56:55.889924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:39.343 04:56:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.343 "name": "Existed_Raid", 00:10:39.343 "uuid": "70d3751d-8a48-4eef-86df-045d7adf718f", 00:10:39.343 "strip_size_kb": 0, 00:10:39.343 
"state": "online", 00:10:39.343 "raid_level": "raid1", 00:10:39.343 "superblock": true, 00:10:39.343 "num_base_bdevs": 4, 00:10:39.343 "num_base_bdevs_discovered": 3, 00:10:39.343 "num_base_bdevs_operational": 3, 00:10:39.343 "base_bdevs_list": [ 00:10:39.343 { 00:10:39.343 "name": null, 00:10:39.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.343 "is_configured": false, 00:10:39.343 "data_offset": 0, 00:10:39.343 "data_size": 63488 00:10:39.343 }, 00:10:39.343 { 00:10:39.343 "name": "BaseBdev2", 00:10:39.343 "uuid": "c3b5da08-bc96-47a1-8167-31618b5319e3", 00:10:39.343 "is_configured": true, 00:10:39.343 "data_offset": 2048, 00:10:39.343 "data_size": 63488 00:10:39.343 }, 00:10:39.343 { 00:10:39.343 "name": "BaseBdev3", 00:10:39.343 "uuid": "2027a6ef-2e43-4dcd-b711-d0c478b46efd", 00:10:39.343 "is_configured": true, 00:10:39.343 "data_offset": 2048, 00:10:39.343 "data_size": 63488 00:10:39.343 }, 00:10:39.343 { 00:10:39.343 "name": "BaseBdev4", 00:10:39.343 "uuid": "cd0377b6-eedd-4370-a17f-69c71087fd4d", 00:10:39.343 "is_configured": true, 00:10:39.343 "data_offset": 2048, 00:10:39.343 "data_size": 63488 00:10:39.343 } 00:10:39.343 ] 00:10:39.343 }' 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.343 04:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.911 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:39.911 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.911 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.911 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.911 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.911 04:56:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.911 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.911 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.911 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.911 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.912 [2024-11-21 04:56:56.424752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.912 [2024-11-21 04:56:56.496466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.912 [2024-11-21 04:56:56.567761] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:39.912 [2024-11-21 04:56:56.567873] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.912 [2024-11-21 04:56:56.580008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.912 [2024-11-21 04:56:56.580070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.912 [2024-11-21 04:56:56.580083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.912 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 BaseBdev2 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 04:56:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:40.172 [ 00:10:40.172 { 00:10:40.172 "name": "BaseBdev2", 00:10:40.172 "aliases": [ 00:10:40.172 "a7326f7e-1694-4fcc-a81c-e769c398fb2a" 00:10:40.172 ], 00:10:40.172 "product_name": "Malloc disk", 00:10:40.172 "block_size": 512, 00:10:40.172 "num_blocks": 65536, 00:10:40.172 "uuid": "a7326f7e-1694-4fcc-a81c-e769c398fb2a", 00:10:40.172 "assigned_rate_limits": { 00:10:40.172 "rw_ios_per_sec": 0, 00:10:40.172 "rw_mbytes_per_sec": 0, 00:10:40.172 "r_mbytes_per_sec": 0, 00:10:40.172 "w_mbytes_per_sec": 0 00:10:40.172 }, 00:10:40.172 "claimed": false, 00:10:40.172 "zoned": false, 00:10:40.172 "supported_io_types": { 00:10:40.172 "read": true, 00:10:40.172 "write": true, 00:10:40.172 "unmap": true, 00:10:40.172 "flush": true, 00:10:40.172 "reset": true, 00:10:40.172 "nvme_admin": false, 00:10:40.172 "nvme_io": false, 00:10:40.172 "nvme_io_md": false, 00:10:40.172 "write_zeroes": true, 00:10:40.173 "zcopy": true, 00:10:40.173 "get_zone_info": false, 00:10:40.173 "zone_management": false, 00:10:40.173 "zone_append": false, 00:10:40.173 "compare": false, 00:10:40.173 "compare_and_write": false, 00:10:40.173 "abort": true, 00:10:40.173 "seek_hole": false, 00:10:40.173 "seek_data": false, 00:10:40.173 "copy": true, 00:10:40.173 "nvme_iov_md": false 00:10:40.173 }, 00:10:40.173 "memory_domains": [ 00:10:40.173 { 00:10:40.173 "dma_device_id": "system", 00:10:40.173 "dma_device_type": 1 00:10:40.173 }, 00:10:40.173 { 00:10:40.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.173 "dma_device_type": 2 00:10:40.173 } 00:10:40.173 ], 00:10:40.173 "driver_specific": {} 00:10:40.173 } 00:10:40.173 ] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.173 04:56:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.173 BaseBdev3 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.173 04:56:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.173 [ 00:10:40.173 { 00:10:40.173 "name": "BaseBdev3", 00:10:40.173 "aliases": [ 00:10:40.173 "82277e17-ff8d-48fe-8e6b-e53b543cd812" 00:10:40.173 ], 00:10:40.173 "product_name": "Malloc disk", 00:10:40.173 "block_size": 512, 00:10:40.173 "num_blocks": 65536, 00:10:40.173 "uuid": "82277e17-ff8d-48fe-8e6b-e53b543cd812", 00:10:40.173 "assigned_rate_limits": { 00:10:40.173 "rw_ios_per_sec": 0, 00:10:40.173 "rw_mbytes_per_sec": 0, 00:10:40.173 "r_mbytes_per_sec": 0, 00:10:40.173 "w_mbytes_per_sec": 0 00:10:40.173 }, 00:10:40.173 "claimed": false, 00:10:40.173 "zoned": false, 00:10:40.173 "supported_io_types": { 00:10:40.173 "read": true, 00:10:40.173 "write": true, 00:10:40.173 "unmap": true, 00:10:40.173 "flush": true, 00:10:40.173 "reset": true, 00:10:40.173 "nvme_admin": false, 00:10:40.173 "nvme_io": false, 00:10:40.173 "nvme_io_md": false, 00:10:40.173 "write_zeroes": true, 00:10:40.173 "zcopy": true, 00:10:40.173 "get_zone_info": false, 00:10:40.173 "zone_management": false, 00:10:40.173 "zone_append": false, 00:10:40.173 "compare": false, 00:10:40.173 "compare_and_write": false, 00:10:40.173 "abort": true, 00:10:40.173 "seek_hole": false, 00:10:40.173 "seek_data": false, 00:10:40.173 "copy": true, 00:10:40.173 "nvme_iov_md": false 00:10:40.173 }, 00:10:40.173 "memory_domains": [ 00:10:40.173 { 00:10:40.173 "dma_device_id": "system", 00:10:40.173 "dma_device_type": 1 00:10:40.173 }, 00:10:40.173 { 00:10:40.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.173 "dma_device_type": 2 00:10:40.173 } 00:10:40.173 ], 00:10:40.173 "driver_specific": {} 00:10:40.173 } 00:10:40.173 ] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.173 BaseBdev4 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.173 [ 00:10:40.173 { 00:10:40.173 "name": "BaseBdev4", 00:10:40.173 "aliases": [ 00:10:40.173 "d6869b6e-73f0-4594-95d8-e4c36360f73c" 00:10:40.173 ], 00:10:40.173 "product_name": "Malloc disk", 00:10:40.173 "block_size": 512, 00:10:40.173 "num_blocks": 65536, 00:10:40.173 "uuid": "d6869b6e-73f0-4594-95d8-e4c36360f73c", 00:10:40.173 "assigned_rate_limits": { 00:10:40.173 "rw_ios_per_sec": 0, 00:10:40.173 "rw_mbytes_per_sec": 0, 00:10:40.173 "r_mbytes_per_sec": 0, 00:10:40.173 "w_mbytes_per_sec": 0 00:10:40.173 }, 00:10:40.173 "claimed": false, 00:10:40.173 "zoned": false, 00:10:40.173 "supported_io_types": { 00:10:40.173 "read": true, 00:10:40.173 "write": true, 00:10:40.173 "unmap": true, 00:10:40.173 "flush": true, 00:10:40.173 "reset": true, 00:10:40.173 "nvme_admin": false, 00:10:40.173 "nvme_io": false, 00:10:40.173 "nvme_io_md": false, 00:10:40.173 "write_zeroes": true, 00:10:40.173 "zcopy": true, 00:10:40.173 "get_zone_info": false, 00:10:40.173 "zone_management": false, 00:10:40.173 "zone_append": false, 00:10:40.173 "compare": false, 00:10:40.173 "compare_and_write": false, 00:10:40.173 "abort": true, 00:10:40.173 "seek_hole": false, 00:10:40.173 "seek_data": false, 00:10:40.173 "copy": true, 00:10:40.173 "nvme_iov_md": false 00:10:40.173 }, 00:10:40.173 "memory_domains": [ 00:10:40.173 { 00:10:40.173 "dma_device_id": "system", 00:10:40.173 "dma_device_type": 1 00:10:40.173 }, 00:10:40.173 { 00:10:40.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.173 "dma_device_type": 2 00:10:40.173 } 00:10:40.173 ], 00:10:40.173 "driver_specific": {} 00:10:40.173 } 00:10:40.173 ] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.173 [2024-11-21 04:56:56.798752] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.173 [2024-11-21 04:56:56.798836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.173 [2024-11-21 04:56:56.798874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.173 [2024-11-21 04:56:56.800914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.173 [2024-11-21 04:56:56.800999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.173 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.174 "name": "Existed_Raid", 00:10:40.174 "uuid": "f033a218-5aaf-450f-8e09-48f22e0a37ff", 00:10:40.174 "strip_size_kb": 0, 00:10:40.174 "state": "configuring", 00:10:40.174 "raid_level": "raid1", 00:10:40.174 "superblock": true, 00:10:40.174 "num_base_bdevs": 4, 00:10:40.174 "num_base_bdevs_discovered": 3, 00:10:40.174 "num_base_bdevs_operational": 4, 00:10:40.174 "base_bdevs_list": [ 00:10:40.174 { 00:10:40.174 "name": "BaseBdev1", 00:10:40.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.174 "is_configured": false, 00:10:40.174 "data_offset": 0, 00:10:40.174 "data_size": 0 00:10:40.174 }, 00:10:40.174 { 00:10:40.174 "name": "BaseBdev2", 00:10:40.174 "uuid": "a7326f7e-1694-4fcc-a81c-e769c398fb2a", 
00:10:40.174 "is_configured": true, 00:10:40.174 "data_offset": 2048, 00:10:40.174 "data_size": 63488 00:10:40.174 }, 00:10:40.174 { 00:10:40.174 "name": "BaseBdev3", 00:10:40.174 "uuid": "82277e17-ff8d-48fe-8e6b-e53b543cd812", 00:10:40.174 "is_configured": true, 00:10:40.174 "data_offset": 2048, 00:10:40.174 "data_size": 63488 00:10:40.174 }, 00:10:40.174 { 00:10:40.174 "name": "BaseBdev4", 00:10:40.174 "uuid": "d6869b6e-73f0-4594-95d8-e4c36360f73c", 00:10:40.174 "is_configured": true, 00:10:40.174 "data_offset": 2048, 00:10:40.174 "data_size": 63488 00:10:40.174 } 00:10:40.174 ] 00:10:40.174 }' 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.174 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.741 [2024-11-21 04:56:57.234131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.741 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.742 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.742 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.742 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.742 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.742 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.742 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.742 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.742 "name": "Existed_Raid", 00:10:40.742 "uuid": "f033a218-5aaf-450f-8e09-48f22e0a37ff", 00:10:40.742 "strip_size_kb": 0, 00:10:40.742 "state": "configuring", 00:10:40.742 "raid_level": "raid1", 00:10:40.742 "superblock": true, 00:10:40.742 "num_base_bdevs": 4, 00:10:40.742 "num_base_bdevs_discovered": 2, 00:10:40.742 "num_base_bdevs_operational": 4, 00:10:40.742 "base_bdevs_list": [ 00:10:40.742 { 00:10:40.742 "name": "BaseBdev1", 00:10:40.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.742 "is_configured": false, 00:10:40.742 "data_offset": 0, 00:10:40.742 "data_size": 0 00:10:40.742 }, 00:10:40.742 { 00:10:40.742 "name": null, 00:10:40.742 "uuid": "a7326f7e-1694-4fcc-a81c-e769c398fb2a", 00:10:40.742 
"is_configured": false, 00:10:40.742 "data_offset": 0, 00:10:40.742 "data_size": 63488 00:10:40.742 }, 00:10:40.742 { 00:10:40.742 "name": "BaseBdev3", 00:10:40.742 "uuid": "82277e17-ff8d-48fe-8e6b-e53b543cd812", 00:10:40.742 "is_configured": true, 00:10:40.742 "data_offset": 2048, 00:10:40.742 "data_size": 63488 00:10:40.742 }, 00:10:40.742 { 00:10:40.742 "name": "BaseBdev4", 00:10:40.742 "uuid": "d6869b6e-73f0-4594-95d8-e4c36360f73c", 00:10:40.742 "is_configured": true, 00:10:40.742 "data_offset": 2048, 00:10:40.742 "data_size": 63488 00:10:40.742 } 00:10:40.742 ] 00:10:40.742 }' 00:10:40.742 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.742 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.001 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:41.001 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:41.001 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.001 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.261 [2024-11-21 04:56:57.740474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.261 BaseBdev1 
00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.261 [ 00:10:41.261 { 00:10:41.261 "name": "BaseBdev1", 00:10:41.261 "aliases": [ 00:10:41.261 "df2009eb-cb25-435d-a8a9-271db8171485" 00:10:41.261 ], 00:10:41.261 "product_name": "Malloc disk", 00:10:41.261 "block_size": 512, 00:10:41.261 "num_blocks": 65536, 00:10:41.261 "uuid": "df2009eb-cb25-435d-a8a9-271db8171485", 00:10:41.261 "assigned_rate_limits": { 00:10:41.261 
"rw_ios_per_sec": 0, 00:10:41.261 "rw_mbytes_per_sec": 0, 00:10:41.261 "r_mbytes_per_sec": 0, 00:10:41.261 "w_mbytes_per_sec": 0 00:10:41.261 }, 00:10:41.261 "claimed": true, 00:10:41.261 "claim_type": "exclusive_write", 00:10:41.261 "zoned": false, 00:10:41.261 "supported_io_types": { 00:10:41.261 "read": true, 00:10:41.261 "write": true, 00:10:41.261 "unmap": true, 00:10:41.261 "flush": true, 00:10:41.261 "reset": true, 00:10:41.261 "nvme_admin": false, 00:10:41.261 "nvme_io": false, 00:10:41.261 "nvme_io_md": false, 00:10:41.261 "write_zeroes": true, 00:10:41.261 "zcopy": true, 00:10:41.261 "get_zone_info": false, 00:10:41.261 "zone_management": false, 00:10:41.261 "zone_append": false, 00:10:41.261 "compare": false, 00:10:41.261 "compare_and_write": false, 00:10:41.261 "abort": true, 00:10:41.261 "seek_hole": false, 00:10:41.261 "seek_data": false, 00:10:41.261 "copy": true, 00:10:41.261 "nvme_iov_md": false 00:10:41.261 }, 00:10:41.261 "memory_domains": [ 00:10:41.261 { 00:10:41.261 "dma_device_id": "system", 00:10:41.261 "dma_device_type": 1 00:10:41.261 }, 00:10:41.261 { 00:10:41.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.261 "dma_device_type": 2 00:10:41.261 } 00:10:41.261 ], 00:10:41.261 "driver_specific": {} 00:10:41.261 } 00:10:41.261 ] 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.261 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.261 "name": "Existed_Raid", 00:10:41.262 "uuid": "f033a218-5aaf-450f-8e09-48f22e0a37ff", 00:10:41.262 "strip_size_kb": 0, 00:10:41.262 "state": "configuring", 00:10:41.262 "raid_level": "raid1", 00:10:41.262 "superblock": true, 00:10:41.262 "num_base_bdevs": 4, 00:10:41.262 "num_base_bdevs_discovered": 3, 00:10:41.262 "num_base_bdevs_operational": 4, 00:10:41.262 "base_bdevs_list": [ 00:10:41.262 { 00:10:41.262 "name": "BaseBdev1", 00:10:41.262 "uuid": "df2009eb-cb25-435d-a8a9-271db8171485", 00:10:41.262 "is_configured": true, 00:10:41.262 "data_offset": 2048, 00:10:41.262 "data_size": 63488 
00:10:41.262 }, 00:10:41.262 { 00:10:41.262 "name": null, 00:10:41.262 "uuid": "a7326f7e-1694-4fcc-a81c-e769c398fb2a", 00:10:41.262 "is_configured": false, 00:10:41.262 "data_offset": 0, 00:10:41.262 "data_size": 63488 00:10:41.262 }, 00:10:41.262 { 00:10:41.262 "name": "BaseBdev3", 00:10:41.262 "uuid": "82277e17-ff8d-48fe-8e6b-e53b543cd812", 00:10:41.262 "is_configured": true, 00:10:41.262 "data_offset": 2048, 00:10:41.262 "data_size": 63488 00:10:41.262 }, 00:10:41.262 { 00:10:41.262 "name": "BaseBdev4", 00:10:41.262 "uuid": "d6869b6e-73f0-4594-95d8-e4c36360f73c", 00:10:41.262 "is_configured": true, 00:10:41.262 "data_offset": 2048, 00:10:41.262 "data_size": 63488 00:10:41.262 } 00:10:41.262 ] 00:10:41.262 }' 00:10:41.262 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.262 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.521 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.781 
[2024-11-21 04:56:58.299619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.781 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.781 04:56:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.781 "name": "Existed_Raid", 00:10:41.781 "uuid": "f033a218-5aaf-450f-8e09-48f22e0a37ff", 00:10:41.781 "strip_size_kb": 0, 00:10:41.781 "state": "configuring", 00:10:41.781 "raid_level": "raid1", 00:10:41.781 "superblock": true, 00:10:41.781 "num_base_bdevs": 4, 00:10:41.781 "num_base_bdevs_discovered": 2, 00:10:41.781 "num_base_bdevs_operational": 4, 00:10:41.781 "base_bdevs_list": [ 00:10:41.781 { 00:10:41.781 "name": "BaseBdev1", 00:10:41.781 "uuid": "df2009eb-cb25-435d-a8a9-271db8171485", 00:10:41.781 "is_configured": true, 00:10:41.781 "data_offset": 2048, 00:10:41.781 "data_size": 63488 00:10:41.781 }, 00:10:41.781 { 00:10:41.781 "name": null, 00:10:41.781 "uuid": "a7326f7e-1694-4fcc-a81c-e769c398fb2a", 00:10:41.781 "is_configured": false, 00:10:41.781 "data_offset": 0, 00:10:41.781 "data_size": 63488 00:10:41.781 }, 00:10:41.781 { 00:10:41.781 "name": null, 00:10:41.781 "uuid": "82277e17-ff8d-48fe-8e6b-e53b543cd812", 00:10:41.781 "is_configured": false, 00:10:41.781 "data_offset": 0, 00:10:41.781 "data_size": 63488 00:10:41.781 }, 00:10:41.781 { 00:10:41.781 "name": "BaseBdev4", 00:10:41.781 "uuid": "d6869b6e-73f0-4594-95d8-e4c36360f73c", 00:10:41.781 "is_configured": true, 00:10:41.782 "data_offset": 2048, 00:10:41.782 "data_size": 63488 00:10:41.782 } 00:10:41.782 ] 00:10:41.782 }' 00:10:41.782 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.782 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.350 
04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.350 [2024-11-21 04:56:58.874691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.350 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.351 "name": "Existed_Raid", 00:10:42.351 "uuid": "f033a218-5aaf-450f-8e09-48f22e0a37ff", 00:10:42.351 "strip_size_kb": 0, 00:10:42.351 "state": "configuring", 00:10:42.351 "raid_level": "raid1", 00:10:42.351 "superblock": true, 00:10:42.351 "num_base_bdevs": 4, 00:10:42.351 "num_base_bdevs_discovered": 3, 00:10:42.351 "num_base_bdevs_operational": 4, 00:10:42.351 "base_bdevs_list": [ 00:10:42.351 { 00:10:42.351 "name": "BaseBdev1", 00:10:42.351 "uuid": "df2009eb-cb25-435d-a8a9-271db8171485", 00:10:42.351 "is_configured": true, 00:10:42.351 "data_offset": 2048, 00:10:42.351 "data_size": 63488 00:10:42.351 }, 00:10:42.351 { 00:10:42.351 "name": null, 00:10:42.351 "uuid": "a7326f7e-1694-4fcc-a81c-e769c398fb2a", 00:10:42.351 "is_configured": false, 00:10:42.351 "data_offset": 0, 00:10:42.351 "data_size": 63488 00:10:42.351 }, 00:10:42.351 { 00:10:42.351 "name": "BaseBdev3", 00:10:42.351 "uuid": "82277e17-ff8d-48fe-8e6b-e53b543cd812", 00:10:42.351 "is_configured": true, 00:10:42.351 "data_offset": 2048, 00:10:42.351 "data_size": 63488 00:10:42.351 }, 00:10:42.351 { 00:10:42.351 "name": "BaseBdev4", 00:10:42.351 "uuid": 
"d6869b6e-73f0-4594-95d8-e4c36360f73c", 00:10:42.351 "is_configured": true, 00:10:42.351 "data_offset": 2048, 00:10:42.351 "data_size": 63488 00:10:42.351 } 00:10:42.351 ] 00:10:42.351 }' 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.351 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.918 [2024-11-21 04:56:59.405811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.918 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.919 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.919 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.919 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.919 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.919 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.919 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.919 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.919 "name": "Existed_Raid", 00:10:42.919 "uuid": "f033a218-5aaf-450f-8e09-48f22e0a37ff", 00:10:42.919 "strip_size_kb": 0, 00:10:42.919 "state": "configuring", 00:10:42.919 "raid_level": "raid1", 00:10:42.919 "superblock": true, 00:10:42.919 "num_base_bdevs": 4, 00:10:42.919 "num_base_bdevs_discovered": 2, 00:10:42.919 "num_base_bdevs_operational": 4, 00:10:42.919 "base_bdevs_list": [ 00:10:42.919 { 00:10:42.919 "name": null, 00:10:42.919 
"uuid": "df2009eb-cb25-435d-a8a9-271db8171485", 00:10:42.919 "is_configured": false, 00:10:42.919 "data_offset": 0, 00:10:42.919 "data_size": 63488 00:10:42.919 }, 00:10:42.919 { 00:10:42.919 "name": null, 00:10:42.919 "uuid": "a7326f7e-1694-4fcc-a81c-e769c398fb2a", 00:10:42.919 "is_configured": false, 00:10:42.919 "data_offset": 0, 00:10:42.919 "data_size": 63488 00:10:42.919 }, 00:10:42.919 { 00:10:42.919 "name": "BaseBdev3", 00:10:42.919 "uuid": "82277e17-ff8d-48fe-8e6b-e53b543cd812", 00:10:42.919 "is_configured": true, 00:10:42.919 "data_offset": 2048, 00:10:42.919 "data_size": 63488 00:10:42.919 }, 00:10:42.919 { 00:10:42.919 "name": "BaseBdev4", 00:10:42.919 "uuid": "d6869b6e-73f0-4594-95d8-e4c36360f73c", 00:10:42.919 "is_configured": true, 00:10:42.919 "data_offset": 2048, 00:10:42.919 "data_size": 63488 00:10:42.919 } 00:10:42.919 ] 00:10:42.919 }' 00:10:42.919 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.919 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.178 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.178 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.178 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.178 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.178 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.178 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:43.178 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:43.178 04:56:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.178 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.437 [2024-11-21 04:56:59.911571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.437 04:56:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.437 "name": "Existed_Raid", 00:10:43.437 "uuid": "f033a218-5aaf-450f-8e09-48f22e0a37ff", 00:10:43.437 "strip_size_kb": 0, 00:10:43.437 "state": "configuring", 00:10:43.437 "raid_level": "raid1", 00:10:43.437 "superblock": true, 00:10:43.437 "num_base_bdevs": 4, 00:10:43.437 "num_base_bdevs_discovered": 3, 00:10:43.437 "num_base_bdevs_operational": 4, 00:10:43.437 "base_bdevs_list": [ 00:10:43.437 { 00:10:43.437 "name": null, 00:10:43.437 "uuid": "df2009eb-cb25-435d-a8a9-271db8171485", 00:10:43.437 "is_configured": false, 00:10:43.437 "data_offset": 0, 00:10:43.437 "data_size": 63488 00:10:43.437 }, 00:10:43.437 { 00:10:43.437 "name": "BaseBdev2", 00:10:43.437 "uuid": "a7326f7e-1694-4fcc-a81c-e769c398fb2a", 00:10:43.437 "is_configured": true, 00:10:43.437 "data_offset": 2048, 00:10:43.437 "data_size": 63488 00:10:43.437 }, 00:10:43.437 { 00:10:43.437 "name": "BaseBdev3", 00:10:43.437 "uuid": "82277e17-ff8d-48fe-8e6b-e53b543cd812", 00:10:43.437 "is_configured": true, 00:10:43.437 "data_offset": 2048, 00:10:43.437 "data_size": 63488 00:10:43.437 }, 00:10:43.437 { 00:10:43.437 "name": "BaseBdev4", 00:10:43.437 "uuid": "d6869b6e-73f0-4594-95d8-e4c36360f73c", 00:10:43.437 "is_configured": true, 00:10:43.437 "data_offset": 2048, 00:10:43.437 "data_size": 63488 00:10:43.437 } 00:10:43.437 ] 00:10:43.437 }' 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.437 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.695 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.695 04:57:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.695 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.695 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.695 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.695 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:43.695 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:43.696 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.696 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.696 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.696 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u df2009eb-cb25-435d-a8a9-271db8171485 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.955 NewBaseBdev 00:10:43.955 [2024-11-21 04:57:00.449954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:43.955 [2024-11-21 04:57:00.450193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:43.955 [2024-11-21 04:57:00.450216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:43.955 [2024-11-21 04:57:00.450531] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:43.955 [2024-11-21 04:57:00.450723] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:43.955 [2024-11-21 04:57:00.450744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:43.955 [2024-11-21 04:57:00.450866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:43.955 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.955 
04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.955 [ 00:10:43.955 { 00:10:43.955 "name": "NewBaseBdev", 00:10:43.955 "aliases": [ 00:10:43.955 "df2009eb-cb25-435d-a8a9-271db8171485" 00:10:43.955 ], 00:10:43.955 "product_name": "Malloc disk", 00:10:43.955 "block_size": 512, 00:10:43.956 "num_blocks": 65536, 00:10:43.956 "uuid": "df2009eb-cb25-435d-a8a9-271db8171485", 00:10:43.956 "assigned_rate_limits": { 00:10:43.956 "rw_ios_per_sec": 0, 00:10:43.956 "rw_mbytes_per_sec": 0, 00:10:43.956 "r_mbytes_per_sec": 0, 00:10:43.956 "w_mbytes_per_sec": 0 00:10:43.956 }, 00:10:43.956 "claimed": true, 00:10:43.956 "claim_type": "exclusive_write", 00:10:43.956 "zoned": false, 00:10:43.956 "supported_io_types": { 00:10:43.956 "read": true, 00:10:43.956 "write": true, 00:10:43.956 "unmap": true, 00:10:43.956 "flush": true, 00:10:43.956 "reset": true, 00:10:43.956 "nvme_admin": false, 00:10:43.956 "nvme_io": false, 00:10:43.956 "nvme_io_md": false, 00:10:43.956 "write_zeroes": true, 00:10:43.956 "zcopy": true, 00:10:43.956 "get_zone_info": false, 00:10:43.956 "zone_management": false, 00:10:43.956 "zone_append": false, 00:10:43.956 "compare": false, 00:10:43.956 "compare_and_write": false, 00:10:43.956 "abort": true, 00:10:43.956 "seek_hole": false, 00:10:43.956 "seek_data": false, 00:10:43.956 "copy": true, 00:10:43.956 "nvme_iov_md": false 00:10:43.956 }, 00:10:43.956 "memory_domains": [ 00:10:43.956 { 00:10:43.956 "dma_device_id": "system", 00:10:43.956 "dma_device_type": 1 00:10:43.956 }, 00:10:43.956 { 00:10:43.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.956 "dma_device_type": 2 00:10:43.956 } 00:10:43.956 ], 00:10:43.956 "driver_specific": {} 00:10:43.956 } 00:10:43.956 ] 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.956 04:57:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.956 "name": "Existed_Raid", 00:10:43.956 "uuid": "f033a218-5aaf-450f-8e09-48f22e0a37ff", 00:10:43.956 "strip_size_kb": 0, 00:10:43.956 
"state": "online", 00:10:43.956 "raid_level": "raid1", 00:10:43.956 "superblock": true, 00:10:43.956 "num_base_bdevs": 4, 00:10:43.956 "num_base_bdevs_discovered": 4, 00:10:43.956 "num_base_bdevs_operational": 4, 00:10:43.956 "base_bdevs_list": [ 00:10:43.956 { 00:10:43.956 "name": "NewBaseBdev", 00:10:43.956 "uuid": "df2009eb-cb25-435d-a8a9-271db8171485", 00:10:43.956 "is_configured": true, 00:10:43.956 "data_offset": 2048, 00:10:43.956 "data_size": 63488 00:10:43.956 }, 00:10:43.956 { 00:10:43.956 "name": "BaseBdev2", 00:10:43.956 "uuid": "a7326f7e-1694-4fcc-a81c-e769c398fb2a", 00:10:43.956 "is_configured": true, 00:10:43.956 "data_offset": 2048, 00:10:43.956 "data_size": 63488 00:10:43.956 }, 00:10:43.956 { 00:10:43.956 "name": "BaseBdev3", 00:10:43.956 "uuid": "82277e17-ff8d-48fe-8e6b-e53b543cd812", 00:10:43.956 "is_configured": true, 00:10:43.956 "data_offset": 2048, 00:10:43.956 "data_size": 63488 00:10:43.956 }, 00:10:43.956 { 00:10:43.956 "name": "BaseBdev4", 00:10:43.956 "uuid": "d6869b6e-73f0-4594-95d8-e4c36360f73c", 00:10:43.956 "is_configured": true, 00:10:43.956 "data_offset": 2048, 00:10:43.956 "data_size": 63488 00:10:43.956 } 00:10:43.956 ] 00:10:43.956 }' 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.956 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.215 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.215 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.215 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.215 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.215 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.215 
04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.215 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.215 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.215 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.215 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.216 [2024-11-21 04:57:00.945562] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.475 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.475 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.475 "name": "Existed_Raid", 00:10:44.475 "aliases": [ 00:10:44.475 "f033a218-5aaf-450f-8e09-48f22e0a37ff" 00:10:44.475 ], 00:10:44.475 "product_name": "Raid Volume", 00:10:44.475 "block_size": 512, 00:10:44.475 "num_blocks": 63488, 00:10:44.475 "uuid": "f033a218-5aaf-450f-8e09-48f22e0a37ff", 00:10:44.475 "assigned_rate_limits": { 00:10:44.475 "rw_ios_per_sec": 0, 00:10:44.475 "rw_mbytes_per_sec": 0, 00:10:44.475 "r_mbytes_per_sec": 0, 00:10:44.475 "w_mbytes_per_sec": 0 00:10:44.475 }, 00:10:44.475 "claimed": false, 00:10:44.475 "zoned": false, 00:10:44.475 "supported_io_types": { 00:10:44.475 "read": true, 00:10:44.475 "write": true, 00:10:44.475 "unmap": false, 00:10:44.475 "flush": false, 00:10:44.475 "reset": true, 00:10:44.475 "nvme_admin": false, 00:10:44.475 "nvme_io": false, 00:10:44.475 "nvme_io_md": false, 00:10:44.475 "write_zeroes": true, 00:10:44.475 "zcopy": false, 00:10:44.475 "get_zone_info": false, 00:10:44.475 "zone_management": false, 00:10:44.475 "zone_append": false, 00:10:44.475 "compare": false, 00:10:44.475 "compare_and_write": false, 00:10:44.475 
"abort": false, 00:10:44.475 "seek_hole": false, 00:10:44.475 "seek_data": false, 00:10:44.475 "copy": false, 00:10:44.475 "nvme_iov_md": false 00:10:44.475 }, 00:10:44.475 "memory_domains": [ 00:10:44.475 { 00:10:44.475 "dma_device_id": "system", 00:10:44.475 "dma_device_type": 1 00:10:44.475 }, 00:10:44.475 { 00:10:44.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.475 "dma_device_type": 2 00:10:44.475 }, 00:10:44.475 { 00:10:44.476 "dma_device_id": "system", 00:10:44.476 "dma_device_type": 1 00:10:44.476 }, 00:10:44.476 { 00:10:44.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.476 "dma_device_type": 2 00:10:44.476 }, 00:10:44.476 { 00:10:44.476 "dma_device_id": "system", 00:10:44.476 "dma_device_type": 1 00:10:44.476 }, 00:10:44.476 { 00:10:44.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.476 "dma_device_type": 2 00:10:44.476 }, 00:10:44.476 { 00:10:44.476 "dma_device_id": "system", 00:10:44.476 "dma_device_type": 1 00:10:44.476 }, 00:10:44.476 { 00:10:44.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.476 "dma_device_type": 2 00:10:44.476 } 00:10:44.476 ], 00:10:44.476 "driver_specific": { 00:10:44.476 "raid": { 00:10:44.476 "uuid": "f033a218-5aaf-450f-8e09-48f22e0a37ff", 00:10:44.476 "strip_size_kb": 0, 00:10:44.476 "state": "online", 00:10:44.476 "raid_level": "raid1", 00:10:44.476 "superblock": true, 00:10:44.476 "num_base_bdevs": 4, 00:10:44.476 "num_base_bdevs_discovered": 4, 00:10:44.476 "num_base_bdevs_operational": 4, 00:10:44.476 "base_bdevs_list": [ 00:10:44.476 { 00:10:44.476 "name": "NewBaseBdev", 00:10:44.476 "uuid": "df2009eb-cb25-435d-a8a9-271db8171485", 00:10:44.476 "is_configured": true, 00:10:44.476 "data_offset": 2048, 00:10:44.476 "data_size": 63488 00:10:44.476 }, 00:10:44.476 { 00:10:44.476 "name": "BaseBdev2", 00:10:44.476 "uuid": "a7326f7e-1694-4fcc-a81c-e769c398fb2a", 00:10:44.476 "is_configured": true, 00:10:44.476 "data_offset": 2048, 00:10:44.476 "data_size": 63488 00:10:44.476 }, 00:10:44.476 { 
00:10:44.476 "name": "BaseBdev3", 00:10:44.476 "uuid": "82277e17-ff8d-48fe-8e6b-e53b543cd812", 00:10:44.476 "is_configured": true, 00:10:44.476 "data_offset": 2048, 00:10:44.476 "data_size": 63488 00:10:44.476 }, 00:10:44.476 { 00:10:44.476 "name": "BaseBdev4", 00:10:44.476 "uuid": "d6869b6e-73f0-4594-95d8-e4c36360f73c", 00:10:44.476 "is_configured": true, 00:10:44.476 "data_offset": 2048, 00:10:44.476 "data_size": 63488 00:10:44.476 } 00:10:44.476 ] 00:10:44.476 } 00:10:44.476 } 00:10:44.476 }' 00:10:44.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:44.476 BaseBdev2 00:10:44.476 BaseBdev3 00:10:44.476 BaseBdev4' 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.476 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.736 [2024-11-21 04:57:01.248699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.736 [2024-11-21 04:57:01.248778] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.736 [2024-11-21 04:57:01.248895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.736 [2024-11-21 04:57:01.249260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.736 [2024-11-21 04:57:01.249330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84786 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84786 ']' 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84786 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84786 00:10:44.736 killing process with pid 84786 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.736 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.737 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84786' 00:10:44.737 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84786 00:10:44.737 [2024-11-21 04:57:01.290110] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.737 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84786 00:10:44.737 [2024-11-21 04:57:01.333010] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.996 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:44.996 00:10:44.996 real 0m9.967s 00:10:44.996 user 0m17.109s 00:10:44.996 sys 0m2.109s 00:10:44.996 ************************************ 00:10:44.996 END TEST raid_state_function_test_sb 
00:10:44.996 ************************************ 00:10:44.996 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.996 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.996 04:57:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:44.996 04:57:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:44.996 04:57:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.996 04:57:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.996 ************************************ 00:10:44.996 START TEST raid_superblock_test 00:10:44.996 ************************************ 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:44.996 04:57:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85443 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85443 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85443 ']' 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.996 04:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.996 [2024-11-21 04:57:01.726194] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:10:44.996 [2024-11-21 04:57:01.726449] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85443 ]
00:10:45.262 [2024-11-21 04:57:01.886521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:45.262 [2024-11-21 04:57:01.915350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:45.262 [2024-11-21 04:57:01.960402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:45.262 [2024-11-21 04:57:01.960457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.212 malloc1
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.212 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.212 [2024-11-21 04:57:02.660216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:46.212 [2024-11-21 04:57:02.660383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:46.212 [2024-11-21 04:57:02.660458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:46.212 [2024-11-21 04:57:02.660506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:46.212 [2024-11-21 04:57:02.662958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:46.213 [2024-11-21 04:57:02.663060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:46.213 pt1
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.213 malloc2
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.213 [2024-11-21 04:57:02.693143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:46.213 [2024-11-21 04:57:02.693194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:46.213 [2024-11-21 04:57:02.693210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:10:46.213 [2024-11-21 04:57:02.693221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:46.213 [2024-11-21 04:57:02.695299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:46.213 [2024-11-21 04:57:02.695388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:46.213 pt2
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.213 malloc3
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.213 [2024-11-21 04:57:02.721727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:46.213 [2024-11-21 04:57:02.721815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:46.213 [2024-11-21 04:57:02.721850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:10:46.213 [2024-11-21 04:57:02.721886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:46.213 [2024-11-21 04:57:02.724069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:46.213 [2024-11-21 04:57:02.724156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:46.213 pt3
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.213 malloc4
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.213 [2024-11-21 04:57:02.764230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:10:46.213 [2024-11-21 04:57:02.764362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:46.213 [2024-11-21 04:57:02.764400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:10:46.213 [2024-11-21 04:57:02.764440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:46.213 [2024-11-21 04:57:02.766741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:46.213 [2024-11-21 04:57:02.766822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:10:46.213 pt4
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.213 [2024-11-21 04:57:02.776289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:46.213 [2024-11-21 04:57:02.778242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:46.213 [2024-11-21 04:57:02.778343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:46.213 [2024-11-21 04:57:02.778404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:10:46.213 [2024-11-21 04:57:02.778651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:10:46.213 [2024-11-21 04:57:02.778702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:46.213 [2024-11-21 04:57:02.779058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:46.213 [2024-11-21 04:57:02.779300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:10:46.213 [2024-11-21 04:57:02.779347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:10:46.213 [2024-11-21 04:57:02.779590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.213 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.213 "name": "raid_bdev1",
00:10:46.213 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e",
00:10:46.213 "strip_size_kb": 0,
00:10:46.213 "state": "online",
00:10:46.213 "raid_level": "raid1",
00:10:46.213 "superblock": true,
00:10:46.213 "num_base_bdevs": 4,
00:10:46.213 "num_base_bdevs_discovered": 4,
00:10:46.213 "num_base_bdevs_operational": 4,
00:10:46.213 "base_bdevs_list": [
00:10:46.213 {
00:10:46.213 "name": "pt1",
00:10:46.213 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:46.213 "is_configured": true,
00:10:46.213 "data_offset": 2048,
00:10:46.213 "data_size": 63488
00:10:46.213 },
00:10:46.213 {
00:10:46.213 "name": "pt2",
00:10:46.213 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:46.213 "is_configured": true,
00:10:46.213 "data_offset": 2048,
00:10:46.213 "data_size": 63488
00:10:46.213 },
00:10:46.213 {
00:10:46.214 "name": "pt3",
00:10:46.214 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:46.214 "is_configured": true,
00:10:46.214 "data_offset": 2048,
00:10:46.214 "data_size": 63488
00:10:46.214 },
00:10:46.214 {
00:10:46.214 "name": "pt4",
00:10:46.214 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:46.214 "is_configured": true,
00:10:46.214 "data_offset": 2048,
00:10:46.214 "data_size": 63488
00:10:46.214 }
00:10:46.214 ]
00:10:46.214 }'
00:10:46.214 04:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.214 04:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.475 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:10:46.475 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:46.475 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:46.475 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:46.475 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:46.475 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:46.734 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:46.734 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.734 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.734 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:46.734 [2024-11-21 04:57:03.211806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:46.734 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.734 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:46.734 "name": "raid_bdev1",
00:10:46.734 "aliases": [
00:10:46.734 "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e"
00:10:46.734 ],
00:10:46.734 "product_name": "Raid Volume",
00:10:46.734 "block_size": 512,
00:10:46.734 "num_blocks": 63488,
00:10:46.734 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e",
00:10:46.734 "assigned_rate_limits": {
00:10:46.734 "rw_ios_per_sec": 0,
00:10:46.734 "rw_mbytes_per_sec": 0,
00:10:46.734 "r_mbytes_per_sec": 0,
00:10:46.734 "w_mbytes_per_sec": 0
00:10:46.734 },
00:10:46.734 "claimed": false,
00:10:46.734 "zoned": false,
00:10:46.734 "supported_io_types": {
00:10:46.734 "read": true,
00:10:46.734 "write": true,
00:10:46.734 "unmap": false,
00:10:46.734 "flush": false,
00:10:46.734 "reset": true,
00:10:46.734 "nvme_admin": false,
00:10:46.734 "nvme_io": false,
00:10:46.734 "nvme_io_md": false,
00:10:46.734 "write_zeroes": true,
00:10:46.734 "zcopy": false,
00:10:46.734 "get_zone_info": false,
00:10:46.734 "zone_management": false,
00:10:46.734 "zone_append": false,
00:10:46.734 "compare": false,
00:10:46.734 "compare_and_write": false,
00:10:46.734 "abort": false,
00:10:46.734 "seek_hole": false,
00:10:46.734 "seek_data": false,
00:10:46.734 "copy": false,
00:10:46.734 "nvme_iov_md": false
00:10:46.734 },
00:10:46.734 "memory_domains": [
00:10:46.734 {
00:10:46.734 "dma_device_id": "system",
00:10:46.734 "dma_device_type": 1
00:10:46.734 },
00:10:46.734 {
00:10:46.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:46.734 "dma_device_type": 2
00:10:46.734 },
00:10:46.734 {
00:10:46.734 "dma_device_id": "system",
00:10:46.734 "dma_device_type": 1
00:10:46.734 },
00:10:46.734 {
00:10:46.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:46.734 "dma_device_type": 2
00:10:46.734 },
00:10:46.734 {
00:10:46.734 "dma_device_id": "system",
00:10:46.734 "dma_device_type": 1
00:10:46.734 },
00:10:46.734 {
00:10:46.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:46.734 "dma_device_type": 2
00:10:46.734 },
00:10:46.734 {
00:10:46.734 "dma_device_id": "system",
00:10:46.734 "dma_device_type": 1
00:10:46.734 },
00:10:46.734 {
00:10:46.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:46.734 "dma_device_type": 2
00:10:46.734 }
00:10:46.734 ],
00:10:46.734 "driver_specific": {
00:10:46.734 "raid": {
00:10:46.734 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e",
00:10:46.734 "strip_size_kb": 0,
00:10:46.734 "state": "online",
00:10:46.734 "raid_level": "raid1",
00:10:46.734 "superblock": true,
00:10:46.734 "num_base_bdevs": 4,
00:10:46.734 "num_base_bdevs_discovered": 4,
00:10:46.734 "num_base_bdevs_operational": 4,
00:10:46.734 "base_bdevs_list": [
00:10:46.734 {
00:10:46.734 "name": "pt1",
00:10:46.735 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:46.735 "is_configured": true,
00:10:46.735 "data_offset": 2048,
00:10:46.735 "data_size": 63488
00:10:46.735 },
00:10:46.735 {
00:10:46.735 "name": "pt2",
00:10:46.735 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:46.735 "is_configured": true,
00:10:46.735 "data_offset": 2048,
00:10:46.735 "data_size": 63488
00:10:46.735 },
00:10:46.735 {
00:10:46.735 "name": "pt3",
00:10:46.735 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:46.735 "is_configured": true,
00:10:46.735 "data_offset": 2048,
00:10:46.735 "data_size": 63488
00:10:46.735 },
00:10:46.735 {
00:10:46.735 "name": "pt4",
00:10:46.735 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:46.735 "is_configured": true,
00:10:46.735 "data_offset": 2048,
00:10:46.735 "data_size": 63488
00:10:46.735 }
00:10:46.735 ]
00:10:46.735 }
00:10:46.735 }
00:10:46.735 }'
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:46.735 pt2
00:10:46.735 pt3
00:10:46.735 pt4'
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.735 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.995 [2024-11-21 04:57:03.547224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc122563-0e9a-48fc-a3b9-be4d7b49cd3e
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bc122563-0e9a-48fc-a3b9-be4d7b49cd3e ']'
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.995 [2024-11-21 04:57:03.578867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:46.995 [2024-11-21 04:57:03.578894] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:46.995 [2024-11-21 04:57:03.578965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:46.995 [2024-11-21 04:57:03.579054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:46.995 [2024-11-21 04:57:03.579065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.995 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.255 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:47.255 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.256 [2024-11-21 04:57:03.738604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:47.256 [2024-11-21 04:57:03.740552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:47.256 [2024-11-21 04:57:03.740603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:10:47.256 [2024-11-21 04:57:03.740630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:10:47.256 [2024-11-21 04:57:03.740676] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:47.256 [2024-11-21 04:57:03.740725] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:47.256 [2024-11-21 04:57:03.740744] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:10:47.256 [2024-11-21 04:57:03.740760] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:10:47.256 [2024-11-21 04:57:03.740774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:47.256 [2024-11-21 04:57:03.740784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:10:47.256 request:
00:10:47.256 {
00:10:47.256 "name": "raid_bdev1",
00:10:47.256 "raid_level": "raid1",
00:10:47.256 "base_bdevs": [
00:10:47.256 "malloc1",
00:10:47.256 "malloc2",
00:10:47.256 "malloc3",
00:10:47.256 "malloc4"
00:10:47.256 ],
00:10:47.256 "superblock": false,
00:10:47.256 "method": "bdev_raid_create",
00:10:47.256 "req_id": 1
00:10:47.256 }
00:10:47.256 Got JSON-RPC error response
00:10:47.256 response:
00:10:47.256 {
00:10:47.256 "code": -17,
00:10:47.256 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:47.256 }
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.256 [2024-11-21 04:57:03.806428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:47.256 [2024-11-21 04:57:03.806521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:47.256 [2024-11-21 04:57:03.806559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:47.256 [2024-11-21 04:57:03.806588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:47.256 [2024-11-21 04:57:03.808745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:47.256 [2024-11-21 04:57:03.808813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:47.256 [2024-11-21 04:57:03.808897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:47.256 [2024-11-21 04:57:03.808946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:47.256 pt1
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:47.256 "name": "raid_bdev1",
00:10:47.256 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e",
00:10:47.256 "strip_size_kb": 0,
00:10:47.256 "state": "configuring",
00:10:47.256 "raid_level": "raid1",
00:10:47.256 "superblock": true,
00:10:47.256 "num_base_bdevs": 4,
00:10:47.256 "num_base_bdevs_discovered": 1,
00:10:47.256 "num_base_bdevs_operational": 4,
00:10:47.256 "base_bdevs_list": [
00:10:47.256 {
00:10:47.256 "name": "pt1",
00:10:47.256 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:47.256 "is_configured": true,
00:10:47.256 "data_offset": 2048,
00:10:47.256 "data_size": 63488
00:10:47.256 },
00:10:47.256 {
00:10:47.256 "name": null,
00:10:47.256 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:47.256 "is_configured": false,
00:10:47.256 "data_offset": 2048,
00:10:47.256 "data_size": 63488
00:10:47.256 },
00:10:47.256 {
00:10:47.256 "name": null,
00:10:47.256 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:47.256 "is_configured": false,
00:10:47.256 "data_offset": 2048,
00:10:47.256 "data_size": 63488
00:10:47.256 },
00:10:47.256 {
00:10:47.256 "name": null,
00:10:47.256 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:47.256 "is_configured": false,
00:10:47.256 "data_offset": 2048,
00:10:47.256 "data_size": 63488
00:10:47.256 }
00:10:47.256 ]
00:10:47.256 }'
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:47.256 04:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.517 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:10:47.517 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:47.517 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.517 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.517 [2024-11-21 04:57:04.241724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:47.517 [2024-11-21 04:57:04.241785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:47.517 [2024-11-21 04:57:04.241806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:10:47.517 [2024-11-21 04:57:04.241816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:47.517 [2024-11-21 04:57:04.242226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:47.517 [2024-11-21 04:57:04.242255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:47.517 [2024-11-21 04:57:04.242343] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:47.517 [2024-11-21 04:57:04.242376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:47.517 pt2 00:10:47.517 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.517 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:47.517 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.517 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.776 [2024-11-21 04:57:04.249709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:47.776 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.776 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:47.776 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.776 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.776 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.776 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.776 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.776 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.777 "name": "raid_bdev1", 00:10:47.777 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e", 00:10:47.777 "strip_size_kb": 0, 00:10:47.777 "state": "configuring", 00:10:47.777 "raid_level": "raid1", 00:10:47.777 "superblock": true, 00:10:47.777 "num_base_bdevs": 4, 00:10:47.777 "num_base_bdevs_discovered": 1, 00:10:47.777 "num_base_bdevs_operational": 4, 00:10:47.777 "base_bdevs_list": [ 00:10:47.777 { 00:10:47.777 "name": "pt1", 00:10:47.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.777 "is_configured": true, 00:10:47.777 "data_offset": 2048, 00:10:47.777 "data_size": 63488 00:10:47.777 }, 00:10:47.777 { 00:10:47.777 "name": null, 00:10:47.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.777 "is_configured": false, 00:10:47.777 "data_offset": 0, 00:10:47.777 "data_size": 63488 00:10:47.777 }, 00:10:47.777 { 00:10:47.777 "name": null, 00:10:47.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.777 "is_configured": false, 00:10:47.777 "data_offset": 2048, 00:10:47.777 "data_size": 63488 00:10:47.777 }, 00:10:47.777 { 00:10:47.777 "name": null, 00:10:47.777 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:47.777 "is_configured": false, 00:10:47.777 "data_offset": 2048, 00:10:47.777 "data_size": 63488 00:10:47.777 } 00:10:47.777 ] 00:10:47.777 }' 00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.777 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.036 [2024-11-21 04:57:04.688962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:48.036 [2024-11-21 04:57:04.689105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.036 [2024-11-21 04:57:04.689153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:48.036 [2024-11-21 04:57:04.689194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.036 [2024-11-21 04:57:04.689652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.036 [2024-11-21 04:57:04.689712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:48.036 [2024-11-21 04:57:04.689838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:48.036 [2024-11-21 04:57:04.689900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.036 pt2 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:48.036 04:57:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.036 [2024-11-21 04:57:04.700913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:48.036 [2024-11-21 04:57:04.701001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.036 [2024-11-21 04:57:04.701039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:48.036 [2024-11-21 04:57:04.701070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.036 [2024-11-21 04:57:04.701463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.036 [2024-11-21 04:57:04.701521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:48.036 [2024-11-21 04:57:04.701618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:48.036 [2024-11-21 04:57:04.701667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:48.036 pt3 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.036 [2024-11-21 04:57:04.712871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:48.036 [2024-11-21 
04:57:04.712917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.036 [2024-11-21 04:57:04.712930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:48.036 [2024-11-21 04:57:04.712939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.036 [2024-11-21 04:57:04.713239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.036 [2024-11-21 04:57:04.713258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:48.036 [2024-11-21 04:57:04.713304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:48.036 [2024-11-21 04:57:04.713321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:48.036 [2024-11-21 04:57:04.713425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:48.036 [2024-11-21 04:57:04.713440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:48.036 [2024-11-21 04:57:04.713650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:48.036 [2024-11-21 04:57:04.713768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:48.036 [2024-11-21 04:57:04.713777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:48.036 [2024-11-21 04:57:04.713873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.036 pt4 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.036 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.037 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.037 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.037 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.296 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.296 "name": "raid_bdev1", 00:10:48.296 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e", 00:10:48.296 "strip_size_kb": 0, 00:10:48.296 "state": "online", 00:10:48.296 "raid_level": "raid1", 00:10:48.296 "superblock": true, 00:10:48.296 "num_base_bdevs": 4, 00:10:48.296 
"num_base_bdevs_discovered": 4, 00:10:48.296 "num_base_bdevs_operational": 4, 00:10:48.296 "base_bdevs_list": [ 00:10:48.296 { 00:10:48.296 "name": "pt1", 00:10:48.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.296 "is_configured": true, 00:10:48.296 "data_offset": 2048, 00:10:48.296 "data_size": 63488 00:10:48.296 }, 00:10:48.296 { 00:10:48.296 "name": "pt2", 00:10:48.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.296 "is_configured": true, 00:10:48.296 "data_offset": 2048, 00:10:48.296 "data_size": 63488 00:10:48.296 }, 00:10:48.296 { 00:10:48.296 "name": "pt3", 00:10:48.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.296 "is_configured": true, 00:10:48.296 "data_offset": 2048, 00:10:48.296 "data_size": 63488 00:10:48.296 }, 00:10:48.296 { 00:10:48.296 "name": "pt4", 00:10:48.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:48.296 "is_configured": true, 00:10:48.296 "data_offset": 2048, 00:10:48.296 "data_size": 63488 00:10:48.296 } 00:10:48.296 ] 00:10:48.296 }' 00:10:48.296 04:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.296 04:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.555 04:57:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.555 [2024-11-21 04:57:05.124546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.555 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.555 "name": "raid_bdev1", 00:10:48.555 "aliases": [ 00:10:48.555 "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e" 00:10:48.555 ], 00:10:48.555 "product_name": "Raid Volume", 00:10:48.555 "block_size": 512, 00:10:48.555 "num_blocks": 63488, 00:10:48.555 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e", 00:10:48.555 "assigned_rate_limits": { 00:10:48.555 "rw_ios_per_sec": 0, 00:10:48.555 "rw_mbytes_per_sec": 0, 00:10:48.555 "r_mbytes_per_sec": 0, 00:10:48.555 "w_mbytes_per_sec": 0 00:10:48.555 }, 00:10:48.555 "claimed": false, 00:10:48.555 "zoned": false, 00:10:48.555 "supported_io_types": { 00:10:48.555 "read": true, 00:10:48.555 "write": true, 00:10:48.555 "unmap": false, 00:10:48.555 "flush": false, 00:10:48.555 "reset": true, 00:10:48.555 "nvme_admin": false, 00:10:48.555 "nvme_io": false, 00:10:48.555 "nvme_io_md": false, 00:10:48.555 "write_zeroes": true, 00:10:48.555 "zcopy": false, 00:10:48.555 "get_zone_info": false, 00:10:48.556 "zone_management": false, 00:10:48.556 "zone_append": false, 00:10:48.556 "compare": false, 00:10:48.556 "compare_and_write": false, 00:10:48.556 "abort": false, 00:10:48.556 "seek_hole": false, 00:10:48.556 "seek_data": false, 00:10:48.556 "copy": false, 00:10:48.556 "nvme_iov_md": false 00:10:48.556 }, 00:10:48.556 "memory_domains": [ 00:10:48.556 { 00:10:48.556 "dma_device_id": "system", 00:10:48.556 
"dma_device_type": 1 00:10:48.556 }, 00:10:48.556 { 00:10:48.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.556 "dma_device_type": 2 00:10:48.556 }, 00:10:48.556 { 00:10:48.556 "dma_device_id": "system", 00:10:48.556 "dma_device_type": 1 00:10:48.556 }, 00:10:48.556 { 00:10:48.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.556 "dma_device_type": 2 00:10:48.556 }, 00:10:48.556 { 00:10:48.556 "dma_device_id": "system", 00:10:48.556 "dma_device_type": 1 00:10:48.556 }, 00:10:48.556 { 00:10:48.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.556 "dma_device_type": 2 00:10:48.556 }, 00:10:48.556 { 00:10:48.556 "dma_device_id": "system", 00:10:48.556 "dma_device_type": 1 00:10:48.556 }, 00:10:48.556 { 00:10:48.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.556 "dma_device_type": 2 00:10:48.556 } 00:10:48.556 ], 00:10:48.556 "driver_specific": { 00:10:48.556 "raid": { 00:10:48.556 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e", 00:10:48.556 "strip_size_kb": 0, 00:10:48.556 "state": "online", 00:10:48.556 "raid_level": "raid1", 00:10:48.556 "superblock": true, 00:10:48.556 "num_base_bdevs": 4, 00:10:48.556 "num_base_bdevs_discovered": 4, 00:10:48.556 "num_base_bdevs_operational": 4, 00:10:48.556 "base_bdevs_list": [ 00:10:48.556 { 00:10:48.556 "name": "pt1", 00:10:48.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.556 "is_configured": true, 00:10:48.556 "data_offset": 2048, 00:10:48.556 "data_size": 63488 00:10:48.556 }, 00:10:48.556 { 00:10:48.556 "name": "pt2", 00:10:48.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.556 "is_configured": true, 00:10:48.556 "data_offset": 2048, 00:10:48.556 "data_size": 63488 00:10:48.556 }, 00:10:48.556 { 00:10:48.556 "name": "pt3", 00:10:48.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.556 "is_configured": true, 00:10:48.556 "data_offset": 2048, 00:10:48.556 "data_size": 63488 00:10:48.556 }, 00:10:48.556 { 00:10:48.556 "name": "pt4", 00:10:48.556 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:48.556 "is_configured": true, 00:10:48.556 "data_offset": 2048, 00:10:48.556 "data_size": 63488 00:10:48.556 } 00:10:48.556 ] 00:10:48.556 } 00:10:48.556 } 00:10:48.556 }' 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:48.556 pt2 00:10:48.556 pt3 00:10:48.556 pt4' 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:48.556 04:57:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:48.816 [2024-11-21 04:57:05.439962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bc122563-0e9a-48fc-a3b9-be4d7b49cd3e '!=' bc122563-0e9a-48fc-a3b9-be4d7b49cd3e ']' 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.816 [2024-11-21 04:57:05.471617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:48.816 04:57:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.816 "name": "raid_bdev1", 00:10:48.816 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e", 00:10:48.816 "strip_size_kb": 0, 00:10:48.816 "state": "online", 
00:10:48.816 "raid_level": "raid1", 00:10:48.816 "superblock": true, 00:10:48.816 "num_base_bdevs": 4, 00:10:48.816 "num_base_bdevs_discovered": 3, 00:10:48.816 "num_base_bdevs_operational": 3, 00:10:48.816 "base_bdevs_list": [ 00:10:48.816 { 00:10:48.816 "name": null, 00:10:48.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.816 "is_configured": false, 00:10:48.816 "data_offset": 0, 00:10:48.816 "data_size": 63488 00:10:48.816 }, 00:10:48.816 { 00:10:48.816 "name": "pt2", 00:10:48.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.816 "is_configured": true, 00:10:48.816 "data_offset": 2048, 00:10:48.816 "data_size": 63488 00:10:48.816 }, 00:10:48.816 { 00:10:48.816 "name": "pt3", 00:10:48.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.816 "is_configured": true, 00:10:48.816 "data_offset": 2048, 00:10:48.816 "data_size": 63488 00:10:48.816 }, 00:10:48.816 { 00:10:48.816 "name": "pt4", 00:10:48.816 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:48.816 "is_configured": true, 00:10:48.816 "data_offset": 2048, 00:10:48.816 "data_size": 63488 00:10:48.816 } 00:10:48.816 ] 00:10:48.816 }' 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.816 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 [2024-11-21 04:57:05.910868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.387 [2024-11-21 04:57:05.910942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.387 [2024-11-21 04:57:05.911083] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:49.387 [2024-11-21 04:57:05.911215] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.387 [2024-11-21 04:57:05.911282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:49.387 
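After `bdev_passthru_delete pt1` above, the verifier expects `num_base_bdevs_discovered` to drop to 3 while the array stays `online`: the deleted slot becomes a null placeholder with `"is_configured": false` in `base_bdevs_list`. That count can be sketched with jq over an abridged copy of the list from the trace (sample data only; `jq` assumed installed):

```shell
#!/bin/sh
# base_bdevs_list as it appears in the trace after pt1 is removed:
# slot 1 is a null placeholder, pt2..pt4 remain configured.
list='[{"name":null,"is_configured":false},
       {"name":"pt2","is_configured":true},
       {"name":"pt3","is_configured":true},
       {"name":"pt4","is_configured":true}]'

# Count the base bdevs that are still configured (discovered).
printf '%s' "$list" | jq '[.[] | select(.is_configured)] | length'   # 3
```

This mirrors why raid1 tolerates the removal at all: `has_redundancy raid1` returns 0 in the trace, so losing one base bdev degrades the array instead of failing it.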
04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.387 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 [2024-11-21 04:57:06.010652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.387 [2024-11-21 04:57:06.010707] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.387 [2024-11-21 04:57:06.010723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:49.387 [2024-11-21 04:57:06.010734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.387 [2024-11-21 04:57:06.012910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.387 [2024-11-21 04:57:06.012952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.387 [2024-11-21 04:57:06.013025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:49.387 [2024-11-21 04:57:06.013062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.387 pt2 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.387 "name": "raid_bdev1", 00:10:49.387 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e", 00:10:49.387 "strip_size_kb": 0, 00:10:49.387 "state": "configuring", 00:10:49.387 "raid_level": "raid1", 00:10:49.387 "superblock": true, 00:10:49.387 "num_base_bdevs": 4, 00:10:49.387 "num_base_bdevs_discovered": 1, 00:10:49.387 "num_base_bdevs_operational": 3, 00:10:49.387 "base_bdevs_list": [ 00:10:49.387 { 00:10:49.387 "name": null, 00:10:49.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.387 "is_configured": false, 00:10:49.387 "data_offset": 2048, 00:10:49.387 "data_size": 63488 00:10:49.387 }, 00:10:49.387 { 00:10:49.387 "name": "pt2", 00:10:49.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.387 "is_configured": true, 00:10:49.387 "data_offset": 2048, 00:10:49.387 "data_size": 63488 00:10:49.387 }, 00:10:49.387 { 00:10:49.387 "name": null, 00:10:49.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.387 "is_configured": false, 00:10:49.387 "data_offset": 2048, 00:10:49.387 "data_size": 63488 00:10:49.387 }, 00:10:49.387 { 00:10:49.387 "name": null, 00:10:49.387 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.387 "is_configured": false, 00:10:49.387 "data_offset": 2048, 00:10:49.387 "data_size": 63488 00:10:49.387 } 00:10:49.387 ] 00:10:49.387 }' 
00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.387 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.958 [2024-11-21 04:57:06.465970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:49.958 [2024-11-21 04:57:06.466120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.958 [2024-11-21 04:57:06.466176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:49.958 [2024-11-21 04:57:06.466213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.958 [2024-11-21 04:57:06.466674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.958 [2024-11-21 04:57:06.466737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:49.958 [2024-11-21 04:57:06.466868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:49.958 [2024-11-21 04:57:06.466930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:49.958 pt3 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.958 "name": "raid_bdev1", 00:10:49.958 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e", 00:10:49.958 "strip_size_kb": 0, 00:10:49.958 "state": "configuring", 00:10:49.958 "raid_level": "raid1", 00:10:49.958 "superblock": true, 00:10:49.958 "num_base_bdevs": 4, 00:10:49.958 "num_base_bdevs_discovered": 2, 00:10:49.958 "num_base_bdevs_operational": 3, 00:10:49.958 
"base_bdevs_list": [ 00:10:49.958 { 00:10:49.958 "name": null, 00:10:49.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.958 "is_configured": false, 00:10:49.958 "data_offset": 2048, 00:10:49.958 "data_size": 63488 00:10:49.958 }, 00:10:49.958 { 00:10:49.958 "name": "pt2", 00:10:49.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.958 "is_configured": true, 00:10:49.958 "data_offset": 2048, 00:10:49.958 "data_size": 63488 00:10:49.958 }, 00:10:49.958 { 00:10:49.958 "name": "pt3", 00:10:49.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.958 "is_configured": true, 00:10:49.958 "data_offset": 2048, 00:10:49.958 "data_size": 63488 00:10:49.958 }, 00:10:49.958 { 00:10:49.958 "name": null, 00:10:49.958 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.958 "is_configured": false, 00:10:49.958 "data_offset": 2048, 00:10:49.958 "data_size": 63488 00:10:49.958 } 00:10:49.958 ] 00:10:49.958 }' 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.958 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.218 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:50.218 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:50.218 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:50.218 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:50.218 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.218 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.218 [2024-11-21 04:57:06.905231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:50.218 [2024-11-21 04:57:06.905300] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.218 [2024-11-21 04:57:06.905322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:10:50.218 [2024-11-21 04:57:06.905333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.219 [2024-11-21 04:57:06.905856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.219 [2024-11-21 04:57:06.905887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:50.219 [2024-11-21 04:57:06.905969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:50.219 [2024-11-21 04:57:06.906006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:50.219 [2024-11-21 04:57:06.906157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:50.219 [2024-11-21 04:57:06.906172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:50.219 [2024-11-21 04:57:06.906456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:50.219 [2024-11-21 04:57:06.906605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:50.219 [2024-11-21 04:57:06.906616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:50.219 [2024-11-21 04:57:06.906741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.219 pt4 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.219 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.479 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.479 "name": "raid_bdev1", 00:10:50.479 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e", 00:10:50.479 "strip_size_kb": 0, 00:10:50.479 "state": "online", 00:10:50.479 "raid_level": "raid1", 00:10:50.479 "superblock": true, 00:10:50.479 "num_base_bdevs": 4, 00:10:50.479 "num_base_bdevs_discovered": 3, 00:10:50.479 "num_base_bdevs_operational": 3, 00:10:50.479 "base_bdevs_list": [ 00:10:50.479 { 00:10:50.479 "name": null, 00:10:50.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.479 "is_configured": false, 00:10:50.479 
"data_offset": 2048, 00:10:50.479 "data_size": 63488 00:10:50.479 }, 00:10:50.479 { 00:10:50.479 "name": "pt2", 00:10:50.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.479 "is_configured": true, 00:10:50.479 "data_offset": 2048, 00:10:50.479 "data_size": 63488 00:10:50.479 }, 00:10:50.479 { 00:10:50.479 "name": "pt3", 00:10:50.479 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.479 "is_configured": true, 00:10:50.479 "data_offset": 2048, 00:10:50.479 "data_size": 63488 00:10:50.479 }, 00:10:50.479 { 00:10:50.479 "name": "pt4", 00:10:50.479 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.479 "is_configured": true, 00:10:50.479 "data_offset": 2048, 00:10:50.479 "data_size": 63488 00:10:50.479 } 00:10:50.479 ] 00:10:50.479 }' 00:10:50.479 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.479 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.739 [2024-11-21 04:57:07.324525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.739 [2024-11-21 04:57:07.324613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.739 [2024-11-21 04:57:07.324739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.739 [2024-11-21 04:57:07.324854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.739 [2024-11-21 04:57:07.324914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:50.739 04:57:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.739 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.739 [2024-11-21 04:57:07.400377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:50.739 [2024-11-21 04:57:07.400437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:50.739 [2024-11-21 04:57:07.400461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:10:50.740 [2024-11-21 04:57:07.400470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.740 [2024-11-21 04:57:07.402744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.740 [2024-11-21 04:57:07.402781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:50.740 [2024-11-21 04:57:07.402854] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:50.740 [2024-11-21 04:57:07.402894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:50.740 [2024-11-21 04:57:07.403013] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:50.740 [2024-11-21 04:57:07.403026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.740 [2024-11-21 04:57:07.403048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:10:50.740 [2024-11-21 04:57:07.403082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.740 [2024-11-21 04:57:07.403204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:50.740 pt1 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.740 "name": "raid_bdev1", 00:10:50.740 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e", 00:10:50.740 "strip_size_kb": 0, 00:10:50.740 "state": "configuring", 00:10:50.740 "raid_level": "raid1", 00:10:50.740 "superblock": true, 00:10:50.740 "num_base_bdevs": 4, 00:10:50.740 "num_base_bdevs_discovered": 2, 00:10:50.740 "num_base_bdevs_operational": 3, 00:10:50.740 "base_bdevs_list": [ 00:10:50.740 { 00:10:50.740 "name": null, 00:10:50.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.740 "is_configured": false, 00:10:50.740 "data_offset": 2048, 00:10:50.740 
"data_size": 63488 00:10:50.740 }, 00:10:50.740 { 00:10:50.740 "name": "pt2", 00:10:50.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.740 "is_configured": true, 00:10:50.740 "data_offset": 2048, 00:10:50.740 "data_size": 63488 00:10:50.740 }, 00:10:50.740 { 00:10:50.740 "name": "pt3", 00:10:50.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.740 "is_configured": true, 00:10:50.740 "data_offset": 2048, 00:10:50.740 "data_size": 63488 00:10:50.740 }, 00:10:50.740 { 00:10:50.740 "name": null, 00:10:50.740 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.740 "is_configured": false, 00:10:50.740 "data_offset": 2048, 00:10:50.740 "data_size": 63488 00:10:50.740 } 00:10:50.740 ] 00:10:50.740 }' 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.740 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.309 [2024-11-21 
04:57:07.851665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:51.309 [2024-11-21 04:57:07.851776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.309 [2024-11-21 04:57:07.851834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:10:51.309 [2024-11-21 04:57:07.851886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.309 [2024-11-21 04:57:07.852381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.309 [2024-11-21 04:57:07.852448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:51.309 [2024-11-21 04:57:07.852569] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:51.309 [2024-11-21 04:57:07.852639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:51.309 [2024-11-21 04:57:07.852780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:51.309 [2024-11-21 04:57:07.852820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:51.309 [2024-11-21 04:57:07.853104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:51.309 [2024-11-21 04:57:07.853309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:51.309 [2024-11-21 04:57:07.853346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:51.309 [2024-11-21 04:57:07.853523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.309 pt4 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:51.309 04:57:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.309 "name": "raid_bdev1", 00:10:51.309 "uuid": "bc122563-0e9a-48fc-a3b9-be4d7b49cd3e", 00:10:51.309 "strip_size_kb": 0, 00:10:51.309 "state": "online", 00:10:51.309 "raid_level": "raid1", 00:10:51.309 "superblock": true, 00:10:51.309 "num_base_bdevs": 4, 00:10:51.309 "num_base_bdevs_discovered": 3, 00:10:51.309 "num_base_bdevs_operational": 3, 00:10:51.309 "base_bdevs_list": [ 00:10:51.309 { 
00:10:51.309 "name": null, 00:10:51.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.309 "is_configured": false, 00:10:51.309 "data_offset": 2048, 00:10:51.309 "data_size": 63488 00:10:51.309 }, 00:10:51.309 { 00:10:51.309 "name": "pt2", 00:10:51.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.309 "is_configured": true, 00:10:51.309 "data_offset": 2048, 00:10:51.309 "data_size": 63488 00:10:51.309 }, 00:10:51.309 { 00:10:51.309 "name": "pt3", 00:10:51.309 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.309 "is_configured": true, 00:10:51.309 "data_offset": 2048, 00:10:51.309 "data_size": 63488 00:10:51.309 }, 00:10:51.309 { 00:10:51.309 "name": "pt4", 00:10:51.309 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.309 "is_configured": true, 00:10:51.309 "data_offset": 2048, 00:10:51.309 "data_size": 63488 00:10:51.309 } 00:10:51.309 ] 00:10:51.309 }' 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.309 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.569 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:51.569 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.569 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:51.569 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.829 
04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:51.829 [2024-11-21 04:57:08.343182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' bc122563-0e9a-48fc-a3b9-be4d7b49cd3e '!=' bc122563-0e9a-48fc-a3b9-be4d7b49cd3e ']' 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85443 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 85443 ']' 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85443 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85443 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85443' 00:10:51.829 killing process with pid 85443 00:10:51.829 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 85443 00:10:51.829 [2024-11-21 04:57:08.427940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.829 [2024-11-21 04:57:08.428054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.829 04:57:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 85443 00:10:51.829 [2024-11-21 04:57:08.428155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.829 [2024-11-21 04:57:08.428167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:51.829 [2024-11-21 04:57:08.472139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.089 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:52.089 00:10:52.089 real 0m7.056s 00:10:52.089 user 0m11.867s 00:10:52.089 sys 0m1.477s 00:10:52.089 ************************************ 00:10:52.089 END TEST raid_superblock_test 00:10:52.089 ************************************ 00:10:52.089 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.089 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.089 04:57:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:52.089 04:57:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:52.089 04:57:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.089 04:57:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.089 ************************************ 00:10:52.089 START TEST raid_read_error_test 00:10:52.089 ************************************ 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:52.089 
04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:52.089 04:57:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rDZAtp4m5x 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85914 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85914 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 85914 ']' 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.089 04:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.349 [2024-11-21 04:57:08.863816] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:10:52.349 [2024-11-21 04:57:08.863959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85914 ] 00:10:52.349 [2024-11-21 04:57:09.035519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.349 [2024-11-21 04:57:09.063565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.613 [2024-11-21 04:57:09.107732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.613 [2024-11-21 04:57:09.107771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 BaseBdev1_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 true 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 [2024-11-21 04:57:09.734560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:53.182 [2024-11-21 04:57:09.734615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.182 [2024-11-21 04:57:09.734644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:53.182 [2024-11-21 04:57:09.734655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.182 [2024-11-21 04:57:09.736853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.182 [2024-11-21 04:57:09.736894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:53.182 BaseBdev1 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 BaseBdev2_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 true 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 [2024-11-21 04:57:09.775315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:53.182 [2024-11-21 04:57:09.775362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.182 [2024-11-21 04:57:09.775380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:53.182 [2024-11-21 04:57:09.775389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.182 [2024-11-21 04:57:09.777507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.182 [2024-11-21 04:57:09.777549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:53.182 BaseBdev2 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 BaseBdev3_malloc 00:10:53.182 04:57:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 true 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 [2024-11-21 04:57:09.815976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:53.182 [2024-11-21 04:57:09.816024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.182 [2024-11-21 04:57:09.816043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:53.182 [2024-11-21 04:57:09.816052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.182 [2024-11-21 04:57:09.818180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.182 [2024-11-21 04:57:09.818213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:53.182 BaseBdev3 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 BaseBdev4_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 true 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 [2024-11-21 04:57:09.866438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:53.182 [2024-11-21 04:57:09.866538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.182 [2024-11-21 04:57:09.866581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:53.182 [2024-11-21 04:57:09.866592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.182 [2024-11-21 04:57:09.868815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.182 [2024-11-21 04:57:09.868850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:53.182 BaseBdev4 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.182 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 [2024-11-21 04:57:09.878466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.182 [2024-11-21 04:57:09.880314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.182 [2024-11-21 04:57:09.880400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.182 [2024-11-21 04:57:09.880453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:53.182 [2024-11-21 04:57:09.880661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:53.182 [2024-11-21 04:57:09.880673] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:53.182 [2024-11-21 04:57:09.880930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:53.183 [2024-11-21 04:57:09.881074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:53.183 [2024-11-21 04:57:09.881094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:53.183 [2024-11-21 04:57:09.881217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:53.183 04:57:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.183 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.442 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.442 "name": "raid_bdev1", 00:10:53.442 "uuid": "f8b5a795-e745-447f-9069-9360b1c98e4e", 00:10:53.442 "strip_size_kb": 0, 00:10:53.442 "state": "online", 00:10:53.442 "raid_level": "raid1", 00:10:53.442 "superblock": true, 00:10:53.442 "num_base_bdevs": 4, 00:10:53.442 "num_base_bdevs_discovered": 4, 00:10:53.442 "num_base_bdevs_operational": 4, 00:10:53.442 "base_bdevs_list": [ 00:10:53.442 { 
00:10:53.442 "name": "BaseBdev1", 00:10:53.442 "uuid": "9c95d923-67b8-50ea-b1a3-0d6b35d3e951", 00:10:53.442 "is_configured": true, 00:10:53.442 "data_offset": 2048, 00:10:53.442 "data_size": 63488 00:10:53.442 }, 00:10:53.442 { 00:10:53.442 "name": "BaseBdev2", 00:10:53.442 "uuid": "3c48c57c-146a-5015-b64c-a4902038f182", 00:10:53.442 "is_configured": true, 00:10:53.442 "data_offset": 2048, 00:10:53.442 "data_size": 63488 00:10:53.442 }, 00:10:53.442 { 00:10:53.442 "name": "BaseBdev3", 00:10:53.442 "uuid": "dcc95e85-49a8-5ac3-abe9-6957faaa1e2f", 00:10:53.442 "is_configured": true, 00:10:53.442 "data_offset": 2048, 00:10:53.442 "data_size": 63488 00:10:53.442 }, 00:10:53.442 { 00:10:53.442 "name": "BaseBdev4", 00:10:53.442 "uuid": "7726a91f-da7d-518f-a041-dd285122778f", 00:10:53.442 "is_configured": true, 00:10:53.442 "data_offset": 2048, 00:10:53.442 "data_size": 63488 00:10:53.442 } 00:10:53.442 ] 00:10:53.442 }' 00:10:53.442 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.442 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.700 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:53.701 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:53.701 [2024-11-21 04:57:10.378025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.639 04:57:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.639 04:57:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.639 "name": "raid_bdev1", 00:10:54.639 "uuid": "f8b5a795-e745-447f-9069-9360b1c98e4e", 00:10:54.639 "strip_size_kb": 0, 00:10:54.639 "state": "online", 00:10:54.639 "raid_level": "raid1", 00:10:54.639 "superblock": true, 00:10:54.639 "num_base_bdevs": 4, 00:10:54.639 "num_base_bdevs_discovered": 4, 00:10:54.639 "num_base_bdevs_operational": 4, 00:10:54.639 "base_bdevs_list": [ 00:10:54.639 { 00:10:54.639 "name": "BaseBdev1", 00:10:54.639 "uuid": "9c95d923-67b8-50ea-b1a3-0d6b35d3e951", 00:10:54.639 "is_configured": true, 00:10:54.639 "data_offset": 2048, 00:10:54.639 "data_size": 63488 00:10:54.639 }, 00:10:54.639 { 00:10:54.639 "name": "BaseBdev2", 00:10:54.639 "uuid": "3c48c57c-146a-5015-b64c-a4902038f182", 00:10:54.639 "is_configured": true, 00:10:54.639 "data_offset": 2048, 00:10:54.639 "data_size": 63488 00:10:54.639 }, 00:10:54.639 { 00:10:54.639 "name": "BaseBdev3", 00:10:54.639 "uuid": "dcc95e85-49a8-5ac3-abe9-6957faaa1e2f", 00:10:54.639 "is_configured": true, 00:10:54.639 "data_offset": 2048, 00:10:54.639 "data_size": 63488 00:10:54.639 }, 00:10:54.639 { 00:10:54.639 "name": "BaseBdev4", 00:10:54.639 "uuid": "7726a91f-da7d-518f-a041-dd285122778f", 00:10:54.639 "is_configured": true, 00:10:54.639 "data_offset": 2048, 00:10:54.639 "data_size": 63488 00:10:54.639 } 00:10:54.639 ] 00:10:54.639 }' 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.639 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.208 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:55.208 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.208 04:57:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.208 [2024-11-21 04:57:11.745494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.208 [2024-11-21 04:57:11.745590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.208 [2024-11-21 04:57:11.748343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.208 [2024-11-21 04:57:11.748469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.208 [2024-11-21 04:57:11.748637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.209 [2024-11-21 04:57:11.748713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:55.209 { 00:10:55.209 "results": [ 00:10:55.209 { 00:10:55.209 "job": "raid_bdev1", 00:10:55.209 "core_mask": "0x1", 00:10:55.209 "workload": "randrw", 00:10:55.209 "percentage": 50, 00:10:55.209 "status": "finished", 00:10:55.209 "queue_depth": 1, 00:10:55.209 "io_size": 131072, 00:10:55.209 "runtime": 1.368286, 00:10:55.209 "iops": 11488.095325100161, 00:10:55.209 "mibps": 1436.0119156375201, 00:10:55.209 "io_failed": 0, 00:10:55.209 "io_timeout": 0, 00:10:55.209 "avg_latency_us": 84.46373673447786, 00:10:55.209 "min_latency_us": 23.36419213973799, 00:10:55.209 "max_latency_us": 1659.8637554585152 00:10:55.209 } 00:10:55.209 ], 00:10:55.209 "core_count": 1 00:10:55.209 } 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85914 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 85914 ']' 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 85914 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85914 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85914' 00:10:55.209 killing process with pid 85914 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 85914 00:10:55.209 [2024-11-21 04:57:11.789128] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.209 04:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 85914 00:10:55.209 [2024-11-21 04:57:11.824687] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.469 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rDZAtp4m5x 00:10:55.469 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:55.469 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:55.469 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:55.469 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:55.469 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:55.469 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:55.469 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:55.469 00:10:55.469 real 0m3.285s 00:10:55.469 user 0m4.120s 00:10:55.469 sys 0m0.548s 
00:10:55.469 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.469 ************************************ 00:10:55.469 END TEST raid_read_error_test 00:10:55.469 ************************************ 00:10:55.469 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 04:57:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:55.469 04:57:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:55.469 04:57:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.469 04:57:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 ************************************ 00:10:55.469 START TEST raid_write_error_test 00:10:55.469 ************************************ 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rwhl0TZpJD 00:10:55.469 04:57:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86043 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86043 00:10:55.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 86043 ']' 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.469 04:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.729 [2024-11-21 04:57:12.221508] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:10:55.729 [2024-11-21 04:57:12.221655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86043 ] 00:10:55.729 [2024-11-21 04:57:12.393692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.729 [2024-11-21 04:57:12.420043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.988 [2024-11-21 04:57:12.463346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.988 [2024-11-21 04:57:12.463386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 BaseBdev1_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 true 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 [2024-11-21 04:57:13.085840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:56.560 [2024-11-21 04:57:13.085892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.560 [2024-11-21 04:57:13.085912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:56.560 [2024-11-21 04:57:13.085921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.560 [2024-11-21 04:57:13.088128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.560 [2024-11-21 04:57:13.088167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:56.560 BaseBdev1 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 BaseBdev2_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:56.560 04:57:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 true 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 [2024-11-21 04:57:13.126474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:56.560 [2024-11-21 04:57:13.126521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.560 [2024-11-21 04:57:13.126539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:56.560 [2024-11-21 04:57:13.126548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.560 [2024-11-21 04:57:13.128675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.560 [2024-11-21 04:57:13.128728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:56.560 BaseBdev2 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:56.560 BaseBdev3_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 true 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 [2024-11-21 04:57:13.167132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:56.560 [2024-11-21 04:57:13.167182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.560 [2024-11-21 04:57:13.167201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:56.560 [2024-11-21 04:57:13.167210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.560 [2024-11-21 04:57:13.169313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.560 [2024-11-21 04:57:13.169407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:56.560 BaseBdev3 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 BaseBdev4_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 true 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 [2024-11-21 04:57:13.217039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:56.560 [2024-11-21 04:57:13.217102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.560 [2024-11-21 04:57:13.217139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:56.560 [2024-11-21 04:57:13.217148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.560 [2024-11-21 04:57:13.219279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.560 [2024-11-21 04:57:13.219313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:56.560 BaseBdev4 
00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.560 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 [2024-11-21 04:57:13.229065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.560 [2024-11-21 04:57:13.230991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.560 [2024-11-21 04:57:13.231072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.560 [2024-11-21 04:57:13.231134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:56.560 [2024-11-21 04:57:13.231342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:56.561 [2024-11-21 04:57:13.231354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:56.561 [2024-11-21 04:57:13.231623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:56.561 [2024-11-21 04:57:13.231805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:56.561 [2024-11-21 04:57:13.231828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:56.561 [2024-11-21 04:57:13.231955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.561 "name": "raid_bdev1", 00:10:56.561 "uuid": "32580ac8-6921-4744-90f7-3fe65bc49170", 00:10:56.561 "strip_size_kb": 0, 00:10:56.561 "state": "online", 00:10:56.561 "raid_level": "raid1", 00:10:56.561 "superblock": true, 00:10:56.561 "num_base_bdevs": 4, 00:10:56.561 "num_base_bdevs_discovered": 4, 00:10:56.561 
"num_base_bdevs_operational": 4, 00:10:56.561 "base_bdevs_list": [ 00:10:56.561 { 00:10:56.561 "name": "BaseBdev1", 00:10:56.561 "uuid": "cbf50788-387c-50ab-85dd-764b5e23b17d", 00:10:56.561 "is_configured": true, 00:10:56.561 "data_offset": 2048, 00:10:56.561 "data_size": 63488 00:10:56.561 }, 00:10:56.561 { 00:10:56.561 "name": "BaseBdev2", 00:10:56.561 "uuid": "3bdb00a6-d4d5-5e36-961f-5cfcf2c6a66c", 00:10:56.561 "is_configured": true, 00:10:56.561 "data_offset": 2048, 00:10:56.561 "data_size": 63488 00:10:56.561 }, 00:10:56.561 { 00:10:56.561 "name": "BaseBdev3", 00:10:56.561 "uuid": "59d24b3c-c5a9-5371-8315-8dc0d21c4023", 00:10:56.561 "is_configured": true, 00:10:56.561 "data_offset": 2048, 00:10:56.561 "data_size": 63488 00:10:56.561 }, 00:10:56.561 { 00:10:56.561 "name": "BaseBdev4", 00:10:56.561 "uuid": "6b4811e0-09cc-5b9c-83dd-a08f1e849cd8", 00:10:56.561 "is_configured": true, 00:10:56.561 "data_offset": 2048, 00:10:56.561 "data_size": 63488 00:10:56.561 } 00:10:56.561 ] 00:10:56.561 }' 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.561 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.130 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:57.130 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:57.130 [2024-11-21 04:57:13.752554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.069 [2024-11-21 04:57:14.668005] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:58.069 [2024-11-21 04:57:14.668061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.069 [2024-11-21 04:57:14.668374] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000068a0 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.069 "name": "raid_bdev1", 00:10:58.069 "uuid": "32580ac8-6921-4744-90f7-3fe65bc49170", 00:10:58.069 "strip_size_kb": 0, 00:10:58.069 "state": "online", 00:10:58.069 "raid_level": "raid1", 00:10:58.069 "superblock": true, 00:10:58.069 "num_base_bdevs": 4, 00:10:58.069 "num_base_bdevs_discovered": 3, 00:10:58.069 "num_base_bdevs_operational": 3, 00:10:58.069 "base_bdevs_list": [ 00:10:58.069 { 00:10:58.069 "name": null, 00:10:58.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.069 "is_configured": false, 00:10:58.069 "data_offset": 0, 00:10:58.069 "data_size": 63488 00:10:58.069 }, 00:10:58.069 { 00:10:58.069 "name": "BaseBdev2", 00:10:58.069 "uuid": "3bdb00a6-d4d5-5e36-961f-5cfcf2c6a66c", 00:10:58.069 "is_configured": true, 00:10:58.069 "data_offset": 2048, 00:10:58.069 "data_size": 63488 00:10:58.069 }, 00:10:58.069 { 00:10:58.069 "name": "BaseBdev3", 00:10:58.069 "uuid": "59d24b3c-c5a9-5371-8315-8dc0d21c4023", 00:10:58.069 "is_configured": true, 00:10:58.069 "data_offset": 2048, 00:10:58.069 "data_size": 63488 00:10:58.069 }, 00:10:58.069 { 00:10:58.069 "name": "BaseBdev4", 00:10:58.069 "uuid": "6b4811e0-09cc-5b9c-83dd-a08f1e849cd8", 00:10:58.069 "is_configured": true, 00:10:58.069 "data_offset": 2048, 00:10:58.069 "data_size": 63488 00:10:58.069 } 00:10:58.069 ] 
00:10:58.069 }' 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.069 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.638 [2024-11-21 04:57:15.135757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.638 [2024-11-21 04:57:15.135855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.638 [2024-11-21 04:57:15.138352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.638 [2024-11-21 04:57:15.138442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.638 [2024-11-21 04:57:15.138576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.638 [2024-11-21 04:57:15.138637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:58.638 { 00:10:58.638 "results": [ 00:10:58.638 { 00:10:58.638 "job": "raid_bdev1", 00:10:58.638 "core_mask": "0x1", 00:10:58.638 "workload": "randrw", 00:10:58.638 "percentage": 50, 00:10:58.638 "status": "finished", 00:10:58.638 "queue_depth": 1, 00:10:58.638 "io_size": 131072, 00:10:58.638 "runtime": 1.384002, 00:10:58.638 "iops": 12427.727705595802, 00:10:58.638 "mibps": 1553.4659631994753, 00:10:58.638 "io_failed": 0, 00:10:58.638 "io_timeout": 0, 00:10:58.638 "avg_latency_us": 77.89738194373922, 00:10:58.638 "min_latency_us": 23.14061135371179, 00:10:58.638 "max_latency_us": 1430.9170305676855 00:10:58.638 } 00:10:58.638 ], 00:10:58.638 "core_count": 1 
00:10:58.638 } 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86043 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 86043 ']' 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 86043 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86043 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86043' 00:10:58.638 killing process with pid 86043 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 86043 00:10:58.638 [2024-11-21 04:57:15.186062] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.638 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 86043 00:10:58.638 [2024-11-21 04:57:15.221410] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.899 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:58.899 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rwhl0TZpJD 00:10:58.899 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:58.899 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:10:58.899 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:58.899 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.899 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:58.899 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:58.899 ************************************ 00:10:58.899 END TEST raid_write_error_test 00:10:58.899 ************************************ 00:10:58.899 00:10:58.899 real 0m3.327s 00:10:58.899 user 0m4.186s 00:10:58.899 sys 0m0.564s 00:10:58.899 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.899 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.899 04:57:15 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:58.899 04:57:15 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:58.899 04:57:15 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:58.899 04:57:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:58.899 04:57:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.899 04:57:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.899 ************************************ 00:10:58.899 START TEST raid_rebuild_test 00:10:58.899 ************************************ 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:58.899 
04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86177 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86177 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 86177 ']' 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.899 04:57:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.899 [2024-11-21 04:57:15.611867] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:10:58.899 [2024-11-21 04:57:15.612070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:10:58.899 Zero copy mechanism will not be used. 
00:10:58.899 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86177 ] 00:10:59.159 [2024-11-21 04:57:15.787058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.159 [2024-11-21 04:57:15.816180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.159 [2024-11-21 04:57:15.858435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.159 [2024-11-21 04:57:15.858557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.728 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.728 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:10:59.728 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:59.728 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:59.728 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.728 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.728 BaseBdev1_malloc 00:10:59.728 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.728 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:59.728 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.728 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.987 [2024-11-21 04:57:16.460444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:59.987 [2024-11-21 04:57:16.460510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.987 [2024-11-21 
04:57:16.460535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:59.987 [2024-11-21 04:57:16.460553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.987 [2024-11-21 04:57:16.462721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.987 [2024-11-21 04:57:16.462759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:59.987 BaseBdev1 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.987 BaseBdev2_malloc 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.987 [2024-11-21 04:57:16.488990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:59.987 [2024-11-21 04:57:16.489042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.987 [2024-11-21 04:57:16.489060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:59.987 [2024-11-21 04:57:16.489069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:10:59.987 [2024-11-21 04:57:16.491214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.987 [2024-11-21 04:57:16.491302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:59.987 BaseBdev2 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.987 spare_malloc 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.987 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.987 spare_delay 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.988 [2024-11-21 04:57:16.529439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:59.988 [2024-11-21 04:57:16.529490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.988 [2024-11-21 04:57:16.529512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:10:59.988 [2024-11-21 04:57:16.529521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.988 [2024-11-21 04:57:16.531657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.988 [2024-11-21 04:57:16.531692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:59.988 spare 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.988 [2024-11-21 04:57:16.541431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.988 [2024-11-21 04:57:16.543326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.988 [2024-11-21 04:57:16.543404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:59.988 [2024-11-21 04:57:16.543421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:59.988 [2024-11-21 04:57:16.543668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:59.988 [2024-11-21 04:57:16.543787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:59.988 [2024-11-21 04:57:16.543806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:59.988 [2024-11-21 04:57:16.543913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.988 
04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.988 "name": "raid_bdev1", 00:10:59.988 "uuid": "6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:10:59.988 "strip_size_kb": 0, 00:10:59.988 "state": "online", 00:10:59.988 "raid_level": "raid1", 00:10:59.988 "superblock": false, 00:10:59.988 "num_base_bdevs": 2, 00:10:59.988 "num_base_bdevs_discovered": 
2, 00:10:59.988 "num_base_bdevs_operational": 2, 00:10:59.988 "base_bdevs_list": [ 00:10:59.988 { 00:10:59.988 "name": "BaseBdev1", 00:10:59.988 "uuid": "c230fe03-2072-5fc4-9fb2-4da5e3fa7a80", 00:10:59.988 "is_configured": true, 00:10:59.988 "data_offset": 0, 00:10:59.988 "data_size": 65536 00:10:59.988 }, 00:10:59.988 { 00:10:59.988 "name": "BaseBdev2", 00:10:59.988 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:10:59.988 "is_configured": true, 00:10:59.988 "data_offset": 0, 00:10:59.988 "data_size": 65536 00:10:59.988 } 00:10:59.988 ] 00:10:59.988 }' 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.988 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.557 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:00.557 04:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.557 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.557 04:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.557 [2024-11-21 04:57:16.996948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:00.557 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:00.557 [2024-11-21 04:57:17.264258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:00.557 /dev/nbd0 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:00.817 1+0 records in 00:11:00.817 1+0 records out 00:11:00.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613433 s, 6.7 MB/s 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:11:00.817 04:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:05.021 65536+0 records in 00:11:05.021 65536+0 records out 00:11:05.021 33554432 bytes (34 MB, 32 MiB) copied, 4.05662 s, 8.3 MB/s 00:11:05.021 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:05.021 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:05.021 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:05.021 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:05.021 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:05.022 [2024-11-21 04:57:21.587106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:05.022 
04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.022 [2024-11-21 04:57:21.619137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.022 "name": "raid_bdev1", 00:11:05.022 "uuid": "6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:11:05.022 "strip_size_kb": 0, 00:11:05.022 "state": "online", 00:11:05.022 "raid_level": "raid1", 00:11:05.022 "superblock": false, 00:11:05.022 "num_base_bdevs": 2, 00:11:05.022 "num_base_bdevs_discovered": 1, 00:11:05.022 "num_base_bdevs_operational": 1, 00:11:05.022 "base_bdevs_list": [ 00:11:05.022 { 00:11:05.022 "name": null, 00:11:05.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.022 "is_configured": false, 00:11:05.022 "data_offset": 0, 00:11:05.022 "data_size": 65536 00:11:05.022 }, 00:11:05.022 { 00:11:05.022 "name": "BaseBdev2", 00:11:05.022 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:11:05.022 "is_configured": true, 00:11:05.022 "data_offset": 0, 00:11:05.022 "data_size": 65536 00:11:05.022 } 00:11:05.022 ] 00:11:05.022 }' 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.022 04:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 04:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:05.589 04:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.589 04:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 [2024-11-21 04:57:22.030449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:05.590 [2024-11-21 04:57:22.043055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:11:05.590 04:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.590 04:57:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:05.590 [2024-11-21 04:57:22.045535] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:06.525 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:06.525 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:06.525 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:06.525 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:06.525 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:06.525 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:06.526 "name": "raid_bdev1", 00:11:06.526 "uuid": "6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:11:06.526 "strip_size_kb": 0, 00:11:06.526 "state": "online", 00:11:06.526 "raid_level": "raid1", 00:11:06.526 "superblock": false, 00:11:06.526 "num_base_bdevs": 2, 00:11:06.526 "num_base_bdevs_discovered": 2, 00:11:06.526 "num_base_bdevs_operational": 2, 00:11:06.526 "process": { 00:11:06.526 "type": "rebuild", 00:11:06.526 "target": "spare", 00:11:06.526 "progress": { 00:11:06.526 "blocks": 20480, 00:11:06.526 "percent": 31 00:11:06.526 } 00:11:06.526 }, 00:11:06.526 "base_bdevs_list": [ 00:11:06.526 { 
00:11:06.526 "name": "spare", 00:11:06.526 "uuid": "9d3bdd4c-979d-5527-af16-d377964e2921", 00:11:06.526 "is_configured": true, 00:11:06.526 "data_offset": 0, 00:11:06.526 "data_size": 65536 00:11:06.526 }, 00:11:06.526 { 00:11:06.526 "name": "BaseBdev2", 00:11:06.526 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:11:06.526 "is_configured": true, 00:11:06.526 "data_offset": 0, 00:11:06.526 "data_size": 65536 00:11:06.526 } 00:11:06.526 ] 00:11:06.526 }' 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.526 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.526 [2024-11-21 04:57:23.209035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:06.526 [2024-11-21 04:57:23.250517] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:06.526 [2024-11-21 04:57:23.250570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.526 [2024-11-21 04:57:23.250588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:06.526 [2024-11-21 04:57:23.250595] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.785 04:57:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.785 "name": "raid_bdev1", 00:11:06.785 "uuid": "6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:11:06.785 "strip_size_kb": 0, 00:11:06.785 "state": "online", 00:11:06.785 "raid_level": "raid1", 00:11:06.785 "superblock": false, 00:11:06.785 "num_base_bdevs": 2, 00:11:06.785 "num_base_bdevs_discovered": 1, 
00:11:06.785 "num_base_bdevs_operational": 1, 00:11:06.785 "base_bdevs_list": [ 00:11:06.785 { 00:11:06.785 "name": null, 00:11:06.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.785 "is_configured": false, 00:11:06.785 "data_offset": 0, 00:11:06.785 "data_size": 65536 00:11:06.785 }, 00:11:06.785 { 00:11:06.785 "name": "BaseBdev2", 00:11:06.785 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:11:06.785 "is_configured": true, 00:11:06.785 "data_offset": 0, 00:11:06.785 "data_size": 65536 00:11:06.785 } 00:11:06.785 ] 00:11:06.785 }' 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.785 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:07.044 "name": "raid_bdev1", 00:11:07.044 "uuid": 
"6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:11:07.044 "strip_size_kb": 0, 00:11:07.044 "state": "online", 00:11:07.044 "raid_level": "raid1", 00:11:07.044 "superblock": false, 00:11:07.044 "num_base_bdevs": 2, 00:11:07.044 "num_base_bdevs_discovered": 1, 00:11:07.044 "num_base_bdevs_operational": 1, 00:11:07.044 "base_bdevs_list": [ 00:11:07.044 { 00:11:07.044 "name": null, 00:11:07.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.044 "is_configured": false, 00:11:07.044 "data_offset": 0, 00:11:07.044 "data_size": 65536 00:11:07.044 }, 00:11:07.044 { 00:11:07.044 "name": "BaseBdev2", 00:11:07.044 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:11:07.044 "is_configured": true, 00:11:07.044 "data_offset": 0, 00:11:07.044 "data_size": 65536 00:11:07.044 } 00:11:07.044 ] 00:11:07.044 }' 00:11:07.044 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:07.304 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:07.304 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:07.304 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:07.304 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:07.304 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.304 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.304 [2024-11-21 04:57:23.834809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:07.304 [2024-11-21 04:57:23.839758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:11:07.304 04:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.304 04:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:11:07.304 [2024-11-21 04:57:23.841701] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:08.242 "name": "raid_bdev1", 00:11:08.242 "uuid": "6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:11:08.242 "strip_size_kb": 0, 00:11:08.242 "state": "online", 00:11:08.242 "raid_level": "raid1", 00:11:08.242 "superblock": false, 00:11:08.242 "num_base_bdevs": 2, 00:11:08.242 "num_base_bdevs_discovered": 2, 00:11:08.242 "num_base_bdevs_operational": 2, 00:11:08.242 "process": { 00:11:08.242 "type": "rebuild", 00:11:08.242 "target": "spare", 00:11:08.242 "progress": { 00:11:08.242 "blocks": 20480, 00:11:08.242 "percent": 31 00:11:08.242 } 00:11:08.242 }, 00:11:08.242 "base_bdevs_list": [ 00:11:08.242 { 00:11:08.242 "name": "spare", 00:11:08.242 "uuid": 
"9d3bdd4c-979d-5527-af16-d377964e2921", 00:11:08.242 "is_configured": true, 00:11:08.242 "data_offset": 0, 00:11:08.242 "data_size": 65536 00:11:08.242 }, 00:11:08.242 { 00:11:08.242 "name": "BaseBdev2", 00:11:08.242 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:11:08.242 "is_configured": true, 00:11:08.242 "data_offset": 0, 00:11:08.242 "data_size": 65536 00:11:08.242 } 00:11:08.242 ] 00:11:08.242 }' 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=296 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:08.242 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:11:08.501 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.501 04:57:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.501 04:57:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.501 04:57:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.501 04:57:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.501 04:57:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:08.501 "name": "raid_bdev1", 00:11:08.501 "uuid": "6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:11:08.501 "strip_size_kb": 0, 00:11:08.501 "state": "online", 00:11:08.501 "raid_level": "raid1", 00:11:08.501 "superblock": false, 00:11:08.501 "num_base_bdevs": 2, 00:11:08.501 "num_base_bdevs_discovered": 2, 00:11:08.501 "num_base_bdevs_operational": 2, 00:11:08.501 "process": { 00:11:08.501 "type": "rebuild", 00:11:08.501 "target": "spare", 00:11:08.501 "progress": { 00:11:08.501 "blocks": 22528, 00:11:08.501 "percent": 34 00:11:08.501 } 00:11:08.501 }, 00:11:08.501 "base_bdevs_list": [ 00:11:08.501 { 00:11:08.501 "name": "spare", 00:11:08.501 "uuid": "9d3bdd4c-979d-5527-af16-d377964e2921", 00:11:08.501 "is_configured": true, 00:11:08.501 "data_offset": 0, 00:11:08.501 "data_size": 65536 00:11:08.501 }, 00:11:08.501 { 00:11:08.501 "name": "BaseBdev2", 00:11:08.501 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:11:08.501 "is_configured": true, 00:11:08.501 "data_offset": 0, 00:11:08.501 "data_size": 65536 00:11:08.501 } 00:11:08.501 ] 00:11:08.502 }' 00:11:08.502 04:57:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:08.502 04:57:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:08.502 04:57:25 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:08.502 04:57:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:08.502 04:57:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.438 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.438 "name": "raid_bdev1", 00:11:09.438 "uuid": "6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:11:09.438 "strip_size_kb": 0, 00:11:09.438 "state": "online", 00:11:09.438 "raid_level": "raid1", 00:11:09.438 "superblock": false, 00:11:09.438 "num_base_bdevs": 2, 00:11:09.438 "num_base_bdevs_discovered": 2, 00:11:09.438 "num_base_bdevs_operational": 2, 00:11:09.438 "process": { 00:11:09.438 "type": "rebuild", 00:11:09.438 "target": "spare", 
00:11:09.438 "progress": { 00:11:09.438 "blocks": 45056, 00:11:09.438 "percent": 68 00:11:09.438 } 00:11:09.438 }, 00:11:09.438 "base_bdevs_list": [ 00:11:09.438 { 00:11:09.438 "name": "spare", 00:11:09.438 "uuid": "9d3bdd4c-979d-5527-af16-d377964e2921", 00:11:09.438 "is_configured": true, 00:11:09.438 "data_offset": 0, 00:11:09.438 "data_size": 65536 00:11:09.438 }, 00:11:09.438 { 00:11:09.438 "name": "BaseBdev2", 00:11:09.438 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:11:09.438 "is_configured": true, 00:11:09.438 "data_offset": 0, 00:11:09.438 "data_size": 65536 00:11:09.438 } 00:11:09.438 ] 00:11:09.439 }' 00:11:09.439 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.705 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:09.705 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.705 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:09.705 04:57:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:10.656 [2024-11-21 04:57:27.052995] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:10.656 [2024-11-21 04:57:27.053185] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:10.656 [2024-11-21 04:57:27.053254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:10.656 "name": "raid_bdev1", 00:11:10.656 "uuid": "6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:11:10.656 "strip_size_kb": 0, 00:11:10.656 "state": "online", 00:11:10.656 "raid_level": "raid1", 00:11:10.656 "superblock": false, 00:11:10.656 "num_base_bdevs": 2, 00:11:10.656 "num_base_bdevs_discovered": 2, 00:11:10.656 "num_base_bdevs_operational": 2, 00:11:10.656 "base_bdevs_list": [ 00:11:10.656 { 00:11:10.656 "name": "spare", 00:11:10.656 "uuid": "9d3bdd4c-979d-5527-af16-d377964e2921", 00:11:10.656 "is_configured": true, 00:11:10.656 "data_offset": 0, 00:11:10.656 "data_size": 65536 00:11:10.656 }, 00:11:10.656 { 00:11:10.656 "name": "BaseBdev2", 00:11:10.656 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:11:10.656 "is_configured": true, 00:11:10.656 "data_offset": 0, 00:11:10.656 "data_size": 65536 00:11:10.656 } 00:11:10.656 ] 00:11:10.656 }' 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.656 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:10.916 "name": "raid_bdev1", 00:11:10.916 "uuid": "6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:11:10.916 "strip_size_kb": 0, 00:11:10.916 "state": "online", 00:11:10.916 "raid_level": "raid1", 00:11:10.916 "superblock": false, 00:11:10.916 "num_base_bdevs": 2, 00:11:10.916 "num_base_bdevs_discovered": 2, 00:11:10.916 "num_base_bdevs_operational": 2, 00:11:10.916 "base_bdevs_list": [ 00:11:10.916 { 00:11:10.916 "name": "spare", 00:11:10.916 "uuid": "9d3bdd4c-979d-5527-af16-d377964e2921", 00:11:10.916 "is_configured": true, 00:11:10.916 "data_offset": 0, 00:11:10.916 "data_size": 65536 
00:11:10.916 }, 00:11:10.916 { 00:11:10.916 "name": "BaseBdev2", 00:11:10.916 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:11:10.916 "is_configured": true, 00:11:10.916 "data_offset": 0, 00:11:10.916 "data_size": 65536 00:11:10.916 } 00:11:10.916 ] 00:11:10.916 }' 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.916 "name": "raid_bdev1", 00:11:10.916 "uuid": "6576cc82-feeb-4ae4-b325-2cba2a57e5f6", 00:11:10.916 "strip_size_kb": 0, 00:11:10.916 "state": "online", 00:11:10.916 "raid_level": "raid1", 00:11:10.916 "superblock": false, 00:11:10.916 "num_base_bdevs": 2, 00:11:10.916 "num_base_bdevs_discovered": 2, 00:11:10.916 "num_base_bdevs_operational": 2, 00:11:10.916 "base_bdevs_list": [ 00:11:10.916 { 00:11:10.916 "name": "spare", 00:11:10.916 "uuid": "9d3bdd4c-979d-5527-af16-d377964e2921", 00:11:10.916 "is_configured": true, 00:11:10.916 "data_offset": 0, 00:11:10.916 "data_size": 65536 00:11:10.916 }, 00:11:10.916 { 00:11:10.916 "name": "BaseBdev2", 00:11:10.916 "uuid": "3b5fef3c-d5e1-5c69-ad4e-485ae6524419", 00:11:10.916 "is_configured": true, 00:11:10.916 "data_offset": 0, 00:11:10.916 "data_size": 65536 00:11:10.916 } 00:11:10.916 ] 00:11:10.916 }' 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.916 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.486 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.486 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.486 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.486 [2024-11-21 04:57:27.948478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.486 [2024-11-21 04:57:27.948511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:11:11.486 [2024-11-21 04:57:27.948596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.486 [2024-11-21 04:57:27.948664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.486 [2024-11-21 04:57:27.948689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:11.486 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.486 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.486 04:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:11.486 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.486 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.486 04:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.486 04:57:28 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:11.486 /dev/nbd0 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.486 1+0 records in 00:11:11.486 1+0 records out 00:11:11.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287472 s, 14.2 MB/s 00:11:11.486 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.745 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:11.746 /dev/nbd1 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:11.746 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.746 1+0 records in 00:11:11.746 1+0 records out 00:11:11.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402055 s, 10.2 MB/s 00:11:12.004 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.004 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:12.004 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.004 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:12.004 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:12.004 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.004 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.004 04:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:12.005 04:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:12.005 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:12.005 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.005 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:12.005 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:12.005 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.005 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.264 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:12.523 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:12.523 04:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.523 04:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:12.523 04:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86177 00:11:12.523 04:57:28 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 86177 ']' 00:11:12.523 04:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 86177 00:11:12.523 04:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:11:12.523 04:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.523 04:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86177 00:11:12.523 killing process with pid 86177 00:11:12.523 Received shutdown signal, test time was about 60.000000 seconds 00:11:12.523 00:11:12.523 Latency(us) 00:11:12.523 [2024-11-21T04:57:29.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.523 [2024-11-21T04:57:29.258Z] =================================================================================================================== 00:11:12.523 [2024-11-21T04:57:29.258Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:12.523 04:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.523 04:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.523 04:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86177' 00:11:12.523 04:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 86177 00:11:12.523 [2024-11-21 04:57:29.039613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.523 04:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 86177 00:11:12.523 [2024-11-21 04:57:29.071341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:12.782 00:11:12.782 real 0m13.760s 00:11:12.782 user 0m15.526s 00:11:12.782 sys 0m2.868s 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.782 ************************************ 00:11:12.782 END TEST raid_rebuild_test 00:11:12.782 ************************************ 00:11:12.782 04:57:29 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:12.782 04:57:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:12.782 04:57:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.782 04:57:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.782 ************************************ 00:11:12.782 START TEST raid_rebuild_test_sb 00:11:12.782 ************************************ 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:12.782 04:57:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86578 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86578 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86578 ']' 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.782 04:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.782 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:12.782 Zero copy mechanism will not be used. 00:11:12.782 [2024-11-21 04:57:29.442475] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:11:12.782 [2024-11-21 04:57:29.442601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86578 ] 00:11:13.041 [2024-11-21 04:57:29.613995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.041 [2024-11-21 04:57:29.641799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.041 [2024-11-21 04:57:29.683980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.041 [2024-11-21 04:57:29.684029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.609 BaseBdev1_malloc 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.609 [2024-11-21 04:57:30.305951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:13.609 [2024-11-21 04:57:30.306037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.609 [2024-11-21 04:57:30.306075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:13.609 [2024-11-21 04:57:30.306099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.609 [2024-11-21 04:57:30.308337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.609 [2024-11-21 04:57:30.308382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:13.609 BaseBdev1 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.609 BaseBdev2_malloc 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.609 [2024-11-21 04:57:30.334703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:13.609 [2024-11-21 04:57:30.334780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.609 [2024-11-21 04:57:30.334804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:13.609 [2024-11-21 04:57:30.334813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.609 [2024-11-21 04:57:30.336929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.609 [2024-11-21 04:57:30.336966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:13.609 BaseBdev2 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.609 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.868 spare_malloc 00:11:13.868 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.868 04:57:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:13.868 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.868 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.868 spare_delay 00:11:13.868 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.868 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.869 [2024-11-21 04:57:30.375594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:13.869 [2024-11-21 04:57:30.375666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.869 [2024-11-21 04:57:30.375693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:13.869 [2024-11-21 04:57:30.375702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.869 [2024-11-21 04:57:30.377974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.869 [2024-11-21 04:57:30.378012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:13.869 spare 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:13.869 [2024-11-21 04:57:30.387617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.869 [2024-11-21 04:57:30.389525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.869 [2024-11-21 04:57:30.389697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:13.869 [2024-11-21 04:57:30.389710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:13.869 [2024-11-21 04:57:30.390037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:13.869 [2024-11-21 04:57:30.390221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:13.869 [2024-11-21 04:57:30.390254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:13.869 [2024-11-21 04:57:30.390437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.869 04:57:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.869 "name": "raid_bdev1", 00:11:13.869 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:13.869 "strip_size_kb": 0, 00:11:13.869 "state": "online", 00:11:13.869 "raid_level": "raid1", 00:11:13.869 "superblock": true, 00:11:13.869 "num_base_bdevs": 2, 00:11:13.869 "num_base_bdevs_discovered": 2, 00:11:13.869 "num_base_bdevs_operational": 2, 00:11:13.869 "base_bdevs_list": [ 00:11:13.869 { 00:11:13.869 "name": "BaseBdev1", 00:11:13.869 "uuid": "e3a598c1-abda-53d1-81a6-641518cddc8a", 00:11:13.869 "is_configured": true, 00:11:13.869 "data_offset": 2048, 00:11:13.869 "data_size": 63488 00:11:13.869 }, 00:11:13.869 { 00:11:13.869 "name": "BaseBdev2", 00:11:13.869 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:13.869 "is_configured": true, 00:11:13.869 "data_offset": 2048, 00:11:13.869 "data_size": 63488 00:11:13.869 } 00:11:13.869 ] 00:11:13.869 }' 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.869 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:11:14.128 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:14.128 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.128 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:14.128 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.387 [2024-11-21 04:57:30.867029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:14.387 
04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:14.387 04:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:14.646 [2024-11-21 04:57:31.130330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:14.646 /dev/nbd0 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:14.646 04:57:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:14.646 1+0 records in 00:11:14.646 1+0 records out 00:11:14.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430986 s, 9.5 MB/s 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:14.646 04:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:18.839 63488+0 records in 00:11:18.839 63488+0 records out 00:11:18.839 32505856 bytes (33 MB, 31 MiB) copied, 3.59857 s, 9.0 MB/s 00:11:18.839 04:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:18.839 04:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:18.839 04:57:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:18.839 04:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:18.839 04:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:18.839 04:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.839 04:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:18.839 [2024-11-21 04:57:34.986105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.839 04:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:18.839 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:18.839 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:18.839 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:18.839 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:18.839 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:18.839 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.840 [2024-11-21 04:57:35.018117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.840 04:57:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.840 "name": "raid_bdev1", 00:11:18.840 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:18.840 "strip_size_kb": 0, 00:11:18.840 "state": "online", 00:11:18.840 "raid_level": "raid1", 00:11:18.840 "superblock": true, 00:11:18.840 "num_base_bdevs": 2, 
00:11:18.840 "num_base_bdevs_discovered": 1, 00:11:18.840 "num_base_bdevs_operational": 1, 00:11:18.840 "base_bdevs_list": [ 00:11:18.840 { 00:11:18.840 "name": null, 00:11:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.840 "is_configured": false, 00:11:18.840 "data_offset": 0, 00:11:18.840 "data_size": 63488 00:11:18.840 }, 00:11:18.840 { 00:11:18.840 "name": "BaseBdev2", 00:11:18.840 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:18.840 "is_configured": true, 00:11:18.840 "data_offset": 2048, 00:11:18.840 "data_size": 63488 00:11:18.840 } 00:11:18.840 ] 00:11:18.840 }' 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.840 [2024-11-21 04:57:35.497297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:18.840 [2024-11-21 04:57:35.512843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.840 04:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:18.840 [2024-11-21 04:57:35.518935] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:20.217 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.217 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.217 04:57:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.217 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.217 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.217 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.217 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.217 04:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.217 04:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.217 04:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.217 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.217 "name": "raid_bdev1", 00:11:20.217 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:20.217 "strip_size_kb": 0, 00:11:20.217 "state": "online", 00:11:20.217 "raid_level": "raid1", 00:11:20.217 "superblock": true, 00:11:20.217 "num_base_bdevs": 2, 00:11:20.217 "num_base_bdevs_discovered": 2, 00:11:20.218 "num_base_bdevs_operational": 2, 00:11:20.218 "process": { 00:11:20.218 "type": "rebuild", 00:11:20.218 "target": "spare", 00:11:20.218 "progress": { 00:11:20.218 "blocks": 20480, 00:11:20.218 "percent": 32 00:11:20.218 } 00:11:20.218 }, 00:11:20.218 "base_bdevs_list": [ 00:11:20.218 { 00:11:20.218 "name": "spare", 00:11:20.218 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:20.218 "is_configured": true, 00:11:20.218 "data_offset": 2048, 00:11:20.218 "data_size": 63488 00:11:20.218 }, 00:11:20.218 { 00:11:20.218 "name": "BaseBdev2", 00:11:20.218 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:20.218 "is_configured": true, 00:11:20.218 "data_offset": 2048, 00:11:20.218 "data_size": 63488 00:11:20.218 } 
00:11:20.218 ] 00:11:20.218 }' 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.218 [2024-11-21 04:57:36.653717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:20.218 [2024-11-21 04:57:36.724204] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:20.218 [2024-11-21 04:57:36.724274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.218 [2024-11-21 04:57:36.724292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:20.218 [2024-11-21 04:57:36.724299] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.218 "name": "raid_bdev1", 00:11:20.218 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:20.218 "strip_size_kb": 0, 00:11:20.218 "state": "online", 00:11:20.218 "raid_level": "raid1", 00:11:20.218 "superblock": true, 00:11:20.218 "num_base_bdevs": 2, 00:11:20.218 "num_base_bdevs_discovered": 1, 00:11:20.218 "num_base_bdevs_operational": 1, 00:11:20.218 "base_bdevs_list": [ 00:11:20.218 { 00:11:20.218 "name": null, 00:11:20.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.218 "is_configured": false, 00:11:20.218 "data_offset": 0, 00:11:20.218 "data_size": 63488 00:11:20.218 }, 00:11:20.218 { 00:11:20.218 "name": "BaseBdev2", 00:11:20.218 "uuid": 
"bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:20.218 "is_configured": true, 00:11:20.218 "data_offset": 2048, 00:11:20.218 "data_size": 63488 00:11:20.218 } 00:11:20.218 ] 00:11:20.218 }' 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.218 04:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.477 "name": "raid_bdev1", 00:11:20.477 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:20.477 "strip_size_kb": 0, 00:11:20.477 "state": "online", 00:11:20.477 "raid_level": "raid1", 00:11:20.477 "superblock": true, 00:11:20.477 "num_base_bdevs": 2, 00:11:20.477 "num_base_bdevs_discovered": 1, 00:11:20.477 "num_base_bdevs_operational": 1, 00:11:20.477 "base_bdevs_list": [ 00:11:20.477 { 
00:11:20.477 "name": null, 00:11:20.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.477 "is_configured": false, 00:11:20.477 "data_offset": 0, 00:11:20.477 "data_size": 63488 00:11:20.477 }, 00:11:20.477 { 00:11:20.477 "name": "BaseBdev2", 00:11:20.477 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:20.477 "is_configured": true, 00:11:20.477 "data_offset": 2048, 00:11:20.477 "data_size": 63488 00:11:20.477 } 00:11:20.477 ] 00:11:20.477 }' 00:11:20.477 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.736 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:20.736 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.736 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:20.736 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:20.736 04:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.736 04:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.736 [2024-11-21 04:57:37.276366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:20.736 [2024-11-21 04:57:37.281106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:11:20.736 04:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.736 04:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:20.736 [2024-11-21 04:57:37.282920] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:21.675 04:57:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.675 "name": "raid_bdev1", 00:11:21.675 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:21.675 "strip_size_kb": 0, 00:11:21.675 "state": "online", 00:11:21.675 "raid_level": "raid1", 00:11:21.675 "superblock": true, 00:11:21.675 "num_base_bdevs": 2, 00:11:21.675 "num_base_bdevs_discovered": 2, 00:11:21.675 "num_base_bdevs_operational": 2, 00:11:21.675 "process": { 00:11:21.675 "type": "rebuild", 00:11:21.675 "target": "spare", 00:11:21.675 "progress": { 00:11:21.675 "blocks": 20480, 00:11:21.675 "percent": 32 00:11:21.675 } 00:11:21.675 }, 00:11:21.675 "base_bdevs_list": [ 00:11:21.675 { 00:11:21.675 "name": "spare", 00:11:21.675 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:21.675 "is_configured": true, 00:11:21.675 "data_offset": 2048, 00:11:21.675 "data_size": 63488 00:11:21.675 }, 00:11:21.675 { 00:11:21.675 "name": "BaseBdev2", 00:11:21.675 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:21.675 
"is_configured": true, 00:11:21.675 "data_offset": 2048, 00:11:21.675 "data_size": 63488 00:11:21.675 } 00:11:21.675 ] 00:11:21.675 }' 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:21.675 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:21.934 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=310 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.934 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.934 "name": "raid_bdev1", 00:11:21.934 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:21.934 "strip_size_kb": 0, 00:11:21.934 "state": "online", 00:11:21.934 "raid_level": "raid1", 00:11:21.934 "superblock": true, 00:11:21.935 "num_base_bdevs": 2, 00:11:21.935 "num_base_bdevs_discovered": 2, 00:11:21.935 "num_base_bdevs_operational": 2, 00:11:21.935 "process": { 00:11:21.935 "type": "rebuild", 00:11:21.935 "target": "spare", 00:11:21.935 "progress": { 00:11:21.935 "blocks": 22528, 00:11:21.935 "percent": 35 00:11:21.935 } 00:11:21.935 }, 00:11:21.935 "base_bdevs_list": [ 00:11:21.935 { 00:11:21.935 "name": "spare", 00:11:21.935 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:21.935 "is_configured": true, 00:11:21.935 "data_offset": 2048, 00:11:21.935 "data_size": 63488 00:11:21.935 }, 00:11:21.935 { 00:11:21.935 "name": "BaseBdev2", 00:11:21.935 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:21.935 "is_configured": true, 00:11:21.935 "data_offset": 2048, 00:11:21.935 "data_size": 63488 00:11:21.935 } 00:11:21.935 ] 00:11:21.935 }' 00:11:21.935 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.935 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:21.935 04:57:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.935 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:21.935 04:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.871 "name": "raid_bdev1", 00:11:22.871 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:22.871 "strip_size_kb": 0, 00:11:22.871 "state": "online", 00:11:22.871 "raid_level": "raid1", 00:11:22.871 "superblock": true, 00:11:22.871 "num_base_bdevs": 2, 00:11:22.871 "num_base_bdevs_discovered": 2, 00:11:22.871 "num_base_bdevs_operational": 2, 00:11:22.871 "process": { 
00:11:22.871 "type": "rebuild", 00:11:22.871 "target": "spare", 00:11:22.871 "progress": { 00:11:22.871 "blocks": 45056, 00:11:22.871 "percent": 70 00:11:22.871 } 00:11:22.871 }, 00:11:22.871 "base_bdevs_list": [ 00:11:22.871 { 00:11:22.871 "name": "spare", 00:11:22.871 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:22.871 "is_configured": true, 00:11:22.871 "data_offset": 2048, 00:11:22.871 "data_size": 63488 00:11:22.871 }, 00:11:22.871 { 00:11:22.871 "name": "BaseBdev2", 00:11:22.871 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:22.871 "is_configured": true, 00:11:22.871 "data_offset": 2048, 00:11:22.871 "data_size": 63488 00:11:22.871 } 00:11:22.871 ] 00:11:22.871 }' 00:11:22.871 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.130 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:23.130 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.130 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:23.130 04:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:23.697 [2024-11-21 04:57:40.393349] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:23.697 [2024-11-21 04:57:40.393449] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:23.697 [2024-11-21 04:57:40.393534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.956 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:23.956 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:23.956 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.956 
04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:23.956 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:23.956 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:24.216 "name": "raid_bdev1", 00:11:24.216 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:24.216 "strip_size_kb": 0, 00:11:24.216 "state": "online", 00:11:24.216 "raid_level": "raid1", 00:11:24.216 "superblock": true, 00:11:24.216 "num_base_bdevs": 2, 00:11:24.216 "num_base_bdevs_discovered": 2, 00:11:24.216 "num_base_bdevs_operational": 2, 00:11:24.216 "base_bdevs_list": [ 00:11:24.216 { 00:11:24.216 "name": "spare", 00:11:24.216 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:24.216 "is_configured": true, 00:11:24.216 "data_offset": 2048, 00:11:24.216 "data_size": 63488 00:11:24.216 }, 00:11:24.216 { 00:11:24.216 "name": "BaseBdev2", 00:11:24.216 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:24.216 "is_configured": true, 00:11:24.216 "data_offset": 2048, 00:11:24.216 "data_size": 63488 00:11:24.216 } 00:11:24.216 ] 00:11:24.216 }' 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:24.216 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:24.217 "name": "raid_bdev1", 00:11:24.217 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:24.217 "strip_size_kb": 0, 00:11:24.217 "state": "online", 00:11:24.217 "raid_level": "raid1", 00:11:24.217 "superblock": true, 00:11:24.217 "num_base_bdevs": 2, 00:11:24.217 "num_base_bdevs_discovered": 2, 00:11:24.217 "num_base_bdevs_operational": 2, 00:11:24.217 "base_bdevs_list": [ 00:11:24.217 { 00:11:24.217 
"name": "spare", 00:11:24.217 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:24.217 "is_configured": true, 00:11:24.217 "data_offset": 2048, 00:11:24.217 "data_size": 63488 00:11:24.217 }, 00:11:24.217 { 00:11:24.217 "name": "BaseBdev2", 00:11:24.217 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:24.217 "is_configured": true, 00:11:24.217 "data_offset": 2048, 00:11:24.217 "data_size": 63488 00:11:24.217 } 00:11:24.217 ] 00:11:24.217 }' 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.217 04:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.479 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.479 "name": "raid_bdev1", 00:11:24.479 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:24.479 "strip_size_kb": 0, 00:11:24.479 "state": "online", 00:11:24.479 "raid_level": "raid1", 00:11:24.479 "superblock": true, 00:11:24.479 "num_base_bdevs": 2, 00:11:24.479 "num_base_bdevs_discovered": 2, 00:11:24.479 "num_base_bdevs_operational": 2, 00:11:24.479 "base_bdevs_list": [ 00:11:24.479 { 00:11:24.479 "name": "spare", 00:11:24.479 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:24.479 "is_configured": true, 00:11:24.479 "data_offset": 2048, 00:11:24.479 "data_size": 63488 00:11:24.479 }, 00:11:24.479 { 00:11:24.479 "name": "BaseBdev2", 00:11:24.479 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:24.479 "is_configured": true, 00:11:24.479 "data_offset": 2048, 00:11:24.479 "data_size": 63488 00:11:24.479 } 00:11:24.479 ] 00:11:24.479 }' 00:11:24.479 04:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.479 04:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.750 [2024-11-21 04:57:41.336480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:24.750 [2024-11-21 04:57:41.336510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.750 [2024-11-21 04:57:41.336599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.750 [2024-11-21 04:57:41.336672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.750 [2024-11-21 04:57:41.336686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:24.750 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:25.021 /dev/nbd0 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.022 1+0 records in 00:11:25.022 1+0 records out 00:11:25.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242076 s, 16.9 MB/s 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:25.022 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:25.281 /dev/nbd1 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:25.281 04:57:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.281 1+0 records in 00:11:25.281 1+0 records out 00:11:25.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047576 s, 8.6 MB/s 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:25.281 
04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.281 04:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:25.540 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:25.540 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:25.540 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:25.540 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.540 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.540 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:25.540 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:25.540 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.540 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.540 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.799 [2024-11-21 04:57:42.368495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:25.799 [2024-11-21 04:57:42.368562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.799 [2024-11-21 04:57:42.368596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:25.799 [2024-11-21 04:57:42.368609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.799 [2024-11-21 04:57:42.370749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.799 [2024-11-21 04:57:42.370788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:25.799 [2024-11-21 04:57:42.370884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:25.799 [2024-11-21 
04:57:42.370930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:25.799 [2024-11-21 04:57:42.371039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.799 spare 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.799 [2024-11-21 04:57:42.470950] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:25.799 [2024-11-21 04:57:42.470977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:25.799 [2024-11-21 04:57:42.471264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:11:25.799 [2024-11-21 04:57:42.471420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:25.799 [2024-11-21 04:57:42.471439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:25.799 [2024-11-21 04:57:42.471603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.799 "name": "raid_bdev1", 00:11:25.799 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:25.799 "strip_size_kb": 0, 00:11:25.799 "state": "online", 00:11:25.799 "raid_level": "raid1", 00:11:25.799 "superblock": true, 00:11:25.799 "num_base_bdevs": 2, 00:11:25.799 "num_base_bdevs_discovered": 2, 00:11:25.799 "num_base_bdevs_operational": 2, 00:11:25.799 "base_bdevs_list": [ 00:11:25.799 { 00:11:25.799 "name": "spare", 00:11:25.799 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:25.799 "is_configured": true, 00:11:25.799 "data_offset": 2048, 00:11:25.799 "data_size": 63488 00:11:25.799 }, 00:11:25.799 { 00:11:25.799 "name": "BaseBdev2", 00:11:25.799 "uuid": 
"bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:25.799 "is_configured": true, 00:11:25.799 "data_offset": 2048, 00:11:25.799 "data_size": 63488 00:11:25.799 } 00:11:25.799 ] 00:11:25.799 }' 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.799 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.368 "name": "raid_bdev1", 00:11:26.368 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:26.368 "strip_size_kb": 0, 00:11:26.368 "state": "online", 00:11:26.368 "raid_level": "raid1", 00:11:26.368 "superblock": true, 00:11:26.368 "num_base_bdevs": 2, 00:11:26.368 "num_base_bdevs_discovered": 2, 00:11:26.368 "num_base_bdevs_operational": 2, 00:11:26.368 "base_bdevs_list": [ 00:11:26.368 { 
00:11:26.368 "name": "spare", 00:11:26.368 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:26.368 "is_configured": true, 00:11:26.368 "data_offset": 2048, 00:11:26.368 "data_size": 63488 00:11:26.368 }, 00:11:26.368 { 00:11:26.368 "name": "BaseBdev2", 00:11:26.368 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:26.368 "is_configured": true, 00:11:26.368 "data_offset": 2048, 00:11:26.368 "data_size": 63488 00:11:26.368 } 00:11:26.368 ] 00:11:26.368 }' 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.368 04:57:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.368 [2024-11-21 04:57:43.043392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.368 "name": "raid_bdev1", 00:11:26.368 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:26.368 "strip_size_kb": 0, 00:11:26.368 
"state": "online", 00:11:26.368 "raid_level": "raid1", 00:11:26.368 "superblock": true, 00:11:26.368 "num_base_bdevs": 2, 00:11:26.368 "num_base_bdevs_discovered": 1, 00:11:26.368 "num_base_bdevs_operational": 1, 00:11:26.368 "base_bdevs_list": [ 00:11:26.368 { 00:11:26.368 "name": null, 00:11:26.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.368 "is_configured": false, 00:11:26.368 "data_offset": 0, 00:11:26.368 "data_size": 63488 00:11:26.368 }, 00:11:26.368 { 00:11:26.368 "name": "BaseBdev2", 00:11:26.368 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:26.368 "is_configured": true, 00:11:26.368 "data_offset": 2048, 00:11:26.368 "data_size": 63488 00:11:26.368 } 00:11:26.368 ] 00:11:26.368 }' 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.368 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.937 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:26.937 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.937 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.937 [2024-11-21 04:57:43.490735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:26.937 [2024-11-21 04:57:43.490926] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:26.937 [2024-11-21 04:57:43.490940] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:26.937 [2024-11-21 04:57:43.490987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:26.937 [2024-11-21 04:57:43.495727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:11:26.937 04:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.937 04:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:26.937 [2024-11-21 04:57:43.497653] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.874 "name": "raid_bdev1", 00:11:27.874 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:27.874 "strip_size_kb": 0, 00:11:27.874 "state": "online", 00:11:27.874 "raid_level": "raid1", 
00:11:27.874 "superblock": true, 00:11:27.874 "num_base_bdevs": 2, 00:11:27.874 "num_base_bdevs_discovered": 2, 00:11:27.874 "num_base_bdevs_operational": 2, 00:11:27.874 "process": { 00:11:27.874 "type": "rebuild", 00:11:27.874 "target": "spare", 00:11:27.874 "progress": { 00:11:27.874 "blocks": 20480, 00:11:27.874 "percent": 32 00:11:27.874 } 00:11:27.874 }, 00:11:27.874 "base_bdevs_list": [ 00:11:27.874 { 00:11:27.874 "name": "spare", 00:11:27.874 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:27.874 "is_configured": true, 00:11:27.874 "data_offset": 2048, 00:11:27.874 "data_size": 63488 00:11:27.874 }, 00:11:27.874 { 00:11:27.874 "name": "BaseBdev2", 00:11:27.874 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:27.874 "is_configured": true, 00:11:27.874 "data_offset": 2048, 00:11:27.874 "data_size": 63488 00:11:27.874 } 00:11:27.874 ] 00:11:27.874 }' 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:27.874 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.133 [2024-11-21 04:57:44.653896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.133 [2024-11-21 04:57:44.701717] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:28.133 [2024-11-21 04:57:44.701780] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:28.133 [2024-11-21 04:57:44.701796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.133 [2024-11-21 04:57:44.701802] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.133 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.133 "name": "raid_bdev1", 00:11:28.133 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:28.133 "strip_size_kb": 0, 00:11:28.133 "state": "online", 00:11:28.133 "raid_level": "raid1", 00:11:28.133 "superblock": true, 00:11:28.133 "num_base_bdevs": 2, 00:11:28.133 "num_base_bdevs_discovered": 1, 00:11:28.133 "num_base_bdevs_operational": 1, 00:11:28.133 "base_bdevs_list": [ 00:11:28.133 { 00:11:28.133 "name": null, 00:11:28.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.133 "is_configured": false, 00:11:28.133 "data_offset": 0, 00:11:28.133 "data_size": 63488 00:11:28.133 }, 00:11:28.133 { 00:11:28.133 "name": "BaseBdev2", 00:11:28.133 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:28.133 "is_configured": true, 00:11:28.133 "data_offset": 2048, 00:11:28.133 "data_size": 63488 00:11:28.133 } 00:11:28.134 ] 00:11:28.134 }' 00:11:28.134 04:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.134 04:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.701 04:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:28.701 04:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.701 04:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.701 [2024-11-21 04:57:45.149644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:28.701 [2024-11-21 04:57:45.149705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.701 [2024-11-21 04:57:45.149729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:28.701 [2024-11-21 04:57:45.149737] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.701 [2024-11-21 04:57:45.150259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.701 [2024-11-21 04:57:45.150292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:28.701 [2024-11-21 04:57:45.150389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:28.701 [2024-11-21 04:57:45.150427] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:28.701 [2024-11-21 04:57:45.150443] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:28.701 [2024-11-21 04:57:45.150471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:28.701 spare 00:11:28.701 [2024-11-21 04:57:45.155249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:11:28.701 04:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.701 04:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:28.701 [2024-11-21 04:57:45.157293] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:29.638 "name": "raid_bdev1", 00:11:29.638 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:29.638 "strip_size_kb": 0, 00:11:29.638 "state": "online", 00:11:29.638 "raid_level": "raid1", 00:11:29.638 "superblock": true, 00:11:29.638 "num_base_bdevs": 2, 00:11:29.638 "num_base_bdevs_discovered": 2, 00:11:29.638 "num_base_bdevs_operational": 2, 00:11:29.638 "process": { 00:11:29.638 "type": "rebuild", 00:11:29.638 "target": "spare", 00:11:29.638 "progress": { 00:11:29.638 "blocks": 20480, 00:11:29.638 "percent": 32 00:11:29.638 } 00:11:29.638 }, 00:11:29.638 "base_bdevs_list": [ 00:11:29.638 { 00:11:29.638 "name": "spare", 00:11:29.638 "uuid": "377ffc7d-b87d-5d82-b0d6-9f06343d2ac6", 00:11:29.638 "is_configured": true, 00:11:29.638 "data_offset": 2048, 00:11:29.638 "data_size": 63488 00:11:29.638 }, 00:11:29.638 { 00:11:29.638 "name": "BaseBdev2", 00:11:29.638 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:29.638 "is_configured": true, 00:11:29.638 "data_offset": 2048, 00:11:29.638 "data_size": 63488 00:11:29.638 } 00:11:29.638 ] 00:11:29.638 }' 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.638 
04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.638 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.638 [2024-11-21 04:57:46.285471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:29.638 [2024-11-21 04:57:46.361255] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:29.638 [2024-11-21 04:57:46.361329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.638 [2024-11-21 04:57:46.361344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:29.638 [2024-11-21 04:57:46.361352] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.897 "name": "raid_bdev1", 00:11:29.897 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:29.897 "strip_size_kb": 0, 00:11:29.897 "state": "online", 00:11:29.897 "raid_level": "raid1", 00:11:29.897 "superblock": true, 00:11:29.897 "num_base_bdevs": 2, 00:11:29.897 "num_base_bdevs_discovered": 1, 00:11:29.897 "num_base_bdevs_operational": 1, 00:11:29.897 "base_bdevs_list": [ 00:11:29.897 { 00:11:29.897 "name": null, 00:11:29.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.897 "is_configured": false, 00:11:29.897 "data_offset": 0, 00:11:29.897 "data_size": 63488 00:11:29.897 }, 00:11:29.897 { 00:11:29.897 "name": "BaseBdev2", 00:11:29.897 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:29.897 "is_configured": true, 00:11:29.897 "data_offset": 2048, 00:11:29.897 "data_size": 63488 00:11:29.897 } 00:11:29.897 ] 00:11:29.897 }' 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.897 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.156 04:57:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.156 "name": "raid_bdev1", 00:11:30.156 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:30.156 "strip_size_kb": 0, 00:11:30.156 "state": "online", 00:11:30.156 "raid_level": "raid1", 00:11:30.156 "superblock": true, 00:11:30.156 "num_base_bdevs": 2, 00:11:30.156 "num_base_bdevs_discovered": 1, 00:11:30.156 "num_base_bdevs_operational": 1, 00:11:30.156 "base_bdevs_list": [ 00:11:30.156 { 00:11:30.156 "name": null, 00:11:30.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.156 "is_configured": false, 00:11:30.156 "data_offset": 0, 00:11:30.156 "data_size": 63488 00:11:30.156 }, 00:11:30.156 { 00:11:30.156 "name": "BaseBdev2", 00:11:30.156 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:30.156 "is_configured": true, 00:11:30.156 "data_offset": 2048, 00:11:30.156 "data_size": 
63488 00:11:30.156 } 00:11:30.156 ] 00:11:30.156 }' 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:30.156 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.415 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:30.415 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:30.415 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.415 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.415 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.415 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:30.415 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.415 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.415 [2024-11-21 04:57:46.941028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:30.415 [2024-11-21 04:57:46.941082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.415 [2024-11-21 04:57:46.941111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:30.415 [2024-11-21 04:57:46.941122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.415 [2024-11-21 04:57:46.941530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.415 [2024-11-21 04:57:46.941559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:11:30.415 [2024-11-21 04:57:46.941629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:30.415 [2024-11-21 04:57:46.941648] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:30.415 [2024-11-21 04:57:46.941658] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:30.415 [2024-11-21 04:57:46.941669] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:30.415 BaseBdev1 00:11:30.415 04:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.415 04:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.363 04:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.363 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.363 "name": "raid_bdev1", 00:11:31.363 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:31.363 "strip_size_kb": 0, 00:11:31.363 "state": "online", 00:11:31.363 "raid_level": "raid1", 00:11:31.363 "superblock": true, 00:11:31.363 "num_base_bdevs": 2, 00:11:31.363 "num_base_bdevs_discovered": 1, 00:11:31.363 "num_base_bdevs_operational": 1, 00:11:31.363 "base_bdevs_list": [ 00:11:31.363 { 00:11:31.363 "name": null, 00:11:31.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.363 "is_configured": false, 00:11:31.363 "data_offset": 0, 00:11:31.363 "data_size": 63488 00:11:31.363 }, 00:11:31.363 { 00:11:31.363 "name": "BaseBdev2", 00:11:31.363 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:31.363 "is_configured": true, 00:11:31.363 "data_offset": 2048, 00:11:31.363 "data_size": 63488 00:11:31.363 } 00:11:31.363 ] 00:11:31.363 }' 00:11:31.363 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.363 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.945 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:31.945 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.945 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:31.945 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:31.945 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.945 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.945 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.945 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.945 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.945 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.946 "name": "raid_bdev1", 00:11:31.946 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:31.946 "strip_size_kb": 0, 00:11:31.946 "state": "online", 00:11:31.946 "raid_level": "raid1", 00:11:31.946 "superblock": true, 00:11:31.946 "num_base_bdevs": 2, 00:11:31.946 "num_base_bdevs_discovered": 1, 00:11:31.946 "num_base_bdevs_operational": 1, 00:11:31.946 "base_bdevs_list": [ 00:11:31.946 { 00:11:31.946 "name": null, 00:11:31.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.946 "is_configured": false, 00:11:31.946 "data_offset": 0, 00:11:31.946 "data_size": 63488 00:11:31.946 }, 00:11:31.946 { 00:11:31.946 "name": "BaseBdev2", 00:11:31.946 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:31.946 "is_configured": true, 00:11:31.946 "data_offset": 2048, 00:11:31.946 "data_size": 63488 00:11:31.946 } 00:11:31.946 ] 00:11:31.946 }' 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:31.946 04:57:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.946 [2024-11-21 04:57:48.562400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.946 [2024-11-21 04:57:48.562558] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:31.946 [2024-11-21 04:57:48.562570] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:31.946 request: 00:11:31.946 { 00:11:31.946 "base_bdev": "BaseBdev1", 00:11:31.946 "raid_bdev": "raid_bdev1", 00:11:31.946 "method": 
"bdev_raid_add_base_bdev", 00:11:31.946 "req_id": 1 00:11:31.946 } 00:11:31.946 Got JSON-RPC error response 00:11:31.946 response: 00:11:31.946 { 00:11:31.946 "code": -22, 00:11:31.946 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:31.946 } 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:31.946 04:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:32.883 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:32.883 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.883 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.883 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.883 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.884 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:32.884 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.884 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.884 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.884 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.884 04:57:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.884 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.884 04:57:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.884 04:57:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.884 04:57:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.144 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.144 "name": "raid_bdev1", 00:11:33.144 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:33.144 "strip_size_kb": 0, 00:11:33.144 "state": "online", 00:11:33.144 "raid_level": "raid1", 00:11:33.144 "superblock": true, 00:11:33.144 "num_base_bdevs": 2, 00:11:33.144 "num_base_bdevs_discovered": 1, 00:11:33.144 "num_base_bdevs_operational": 1, 00:11:33.144 "base_bdevs_list": [ 00:11:33.144 { 00:11:33.144 "name": null, 00:11:33.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.144 "is_configured": false, 00:11:33.144 "data_offset": 0, 00:11:33.144 "data_size": 63488 00:11:33.144 }, 00:11:33.144 { 00:11:33.144 "name": "BaseBdev2", 00:11:33.144 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:33.144 "is_configured": true, 00:11:33.144 "data_offset": 2048, 00:11:33.144 "data_size": 63488 00:11:33.144 } 00:11:33.144 ] 00:11:33.144 }' 00:11:33.144 04:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.144 04:57:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:33.403 "name": "raid_bdev1", 00:11:33.403 "uuid": "20994212-3dcd-444c-a4b5-74c2871f6a5a", 00:11:33.403 "strip_size_kb": 0, 00:11:33.403 "state": "online", 00:11:33.403 "raid_level": "raid1", 00:11:33.403 "superblock": true, 00:11:33.403 "num_base_bdevs": 2, 00:11:33.403 "num_base_bdevs_discovered": 1, 00:11:33.403 "num_base_bdevs_operational": 1, 00:11:33.403 "base_bdevs_list": [ 00:11:33.403 { 00:11:33.403 "name": null, 00:11:33.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.403 "is_configured": false, 00:11:33.403 "data_offset": 0, 00:11:33.403 "data_size": 63488 00:11:33.403 }, 00:11:33.403 { 00:11:33.403 "name": "BaseBdev2", 00:11:33.403 "uuid": "bc20a81a-8054-589d-bbaf-5850d6d4958f", 00:11:33.403 "is_configured": true, 00:11:33.403 "data_offset": 2048, 00:11:33.403 "data_size": 63488 00:11:33.403 } 00:11:33.403 ] 00:11:33.403 }' 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:33.403 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86578 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86578 ']' 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 86578 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86578 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.664 killing process with pid 86578 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86578' 00:11:33.664 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 86578 00:11:33.664 Received shutdown signal, test time was about 60.000000 seconds 00:11:33.664 00:11:33.664 Latency(us) 00:11:33.664 [2024-11-21T04:57:50.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.664 [2024-11-21T04:57:50.399Z] =================================================================================================================== 00:11:33.664 [2024-11-21T04:57:50.399Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:33.664 [2024-11-21 04:57:50.168098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.664 04:57:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 86578 00:11:33.664 [2024-11-21 04:57:50.168262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.664 [2024-11-21 04:57:50.168328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.664 [2024-11-21 04:57:50.168337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:33.664 [2024-11-21 04:57:50.200119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:33.923 00:11:33.923 real 0m21.054s 00:11:33.923 user 0m26.013s 00:11:33.923 sys 0m3.464s 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.923 ************************************ 00:11:33.923 END TEST raid_rebuild_test_sb 00:11:33.923 ************************************ 00:11:33.923 04:57:50 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:33.923 04:57:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:33.923 04:57:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.923 04:57:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.923 ************************************ 00:11:33.923 START TEST raid_rebuild_test_io 00:11:33.923 ************************************ 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:33.923 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:33.924 
04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87294 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87294 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 87294 ']' 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.924 04:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.924 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:33.924 Zero copy mechanism will not be used. 00:11:33.924 [2024-11-21 04:57:50.577325] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:11:33.924 [2024-11-21 04:57:50.577450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87294 ] 00:11:34.182 [2024-11-21 04:57:50.748633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.182 [2024-11-21 04:57:50.773459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.182 [2024-11-21 04:57:50.814916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.182 [2024-11-21 04:57:50.814954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.751 BaseBdev1_malloc 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.751 [2024-11-21 04:57:51.428569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:34.751 [2024-11-21 04:57:51.428638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.751 [2024-11-21 04:57:51.428671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:34.751 [2024-11-21 04:57:51.428683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.751 [2024-11-21 04:57:51.430783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.751 [2024-11-21 04:57:51.430859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:34.751 BaseBdev1 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.751 BaseBdev2_malloc 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.751 [2024-11-21 04:57:51.456956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:34.751 [2024-11-21 04:57:51.457053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.751 [2024-11-21 04:57:51.457075] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:34.751 [2024-11-21 04:57:51.457084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.751 [2024-11-21 04:57:51.459102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.751 [2024-11-21 04:57:51.459135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:34.751 BaseBdev2 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.751 spare_malloc 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.751 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.010 spare_delay 00:11:35.010 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.010 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:35.010 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.010 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.010 [2024-11-21 04:57:51.497362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:35.010 [2024-11-21 04:57:51.497411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.010 [2024-11-21 04:57:51.497432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:35.011 [2024-11-21 04:57:51.497440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.011 [2024-11-21 04:57:51.499510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.011 [2024-11-21 04:57:51.499595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:35.011 spare 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.011 [2024-11-21 04:57:51.509380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.011 [2024-11-21 04:57:51.511082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.011 [2024-11-21 04:57:51.511248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:35.011 [2024-11-21 04:57:51.511280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:35.011 [2024-11-21 04:57:51.511579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:35.011 [2024-11-21 04:57:51.511736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:35.011 [2024-11-21 04:57:51.511781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:11:35.011 [2024-11-21 04:57:51.511994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.011 
"name": "raid_bdev1", 00:11:35.011 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:35.011 "strip_size_kb": 0, 00:11:35.011 "state": "online", 00:11:35.011 "raid_level": "raid1", 00:11:35.011 "superblock": false, 00:11:35.011 "num_base_bdevs": 2, 00:11:35.011 "num_base_bdevs_discovered": 2, 00:11:35.011 "num_base_bdevs_operational": 2, 00:11:35.011 "base_bdevs_list": [ 00:11:35.011 { 00:11:35.011 "name": "BaseBdev1", 00:11:35.011 "uuid": "b52c5917-eb08-5bc7-8683-c6489ab148f7", 00:11:35.011 "is_configured": true, 00:11:35.011 "data_offset": 0, 00:11:35.011 "data_size": 65536 00:11:35.011 }, 00:11:35.011 { 00:11:35.011 "name": "BaseBdev2", 00:11:35.011 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:35.011 "is_configured": true, 00:11:35.011 "data_offset": 0, 00:11:35.011 "data_size": 65536 00:11:35.011 } 00:11:35.011 ] 00:11:35.011 }' 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.011 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.270 [2024-11-21 04:57:51.940901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:35.270 04:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.529 [2024-11-21 04:57:52.040436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:35.529 04:57:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.529 "name": "raid_bdev1", 00:11:35.529 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:35.529 "strip_size_kb": 0, 00:11:35.529 "state": "online", 00:11:35.529 "raid_level": "raid1", 00:11:35.529 "superblock": false, 00:11:35.529 "num_base_bdevs": 2, 00:11:35.529 "num_base_bdevs_discovered": 1, 00:11:35.529 "num_base_bdevs_operational": 1, 00:11:35.529 "base_bdevs_list": [ 00:11:35.529 { 00:11:35.529 "name": null, 00:11:35.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.529 "is_configured": false, 00:11:35.529 "data_offset": 0, 00:11:35.529 "data_size": 65536 00:11:35.529 }, 00:11:35.529 { 00:11:35.529 "name": "BaseBdev2", 00:11:35.529 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:35.529 "is_configured": true, 00:11:35.529 "data_offset": 0, 00:11:35.529 "data_size": 65536 00:11:35.529 } 00:11:35.529 ] 00:11:35.529 }' 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:35.529 04:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.529 [2024-11-21 04:57:52.126248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:35.529 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:35.529 Zero copy mechanism will not be used. 00:11:35.529 Running I/O for 60 seconds... 00:11:35.787 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:35.787 04:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.787 04:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.787 [2024-11-21 04:57:52.408162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:35.787 04:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.787 04:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:35.787 [2024-11-21 04:57:52.469219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:35.787 [2024-11-21 04:57:52.471139] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:36.045 [2024-11-21 04:57:52.572280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:36.045 [2024-11-21 04:57:52.572649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:36.302 [2024-11-21 04:57:52.806175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:36.560 179.00 IOPS, 537.00 MiB/s [2024-11-21T04:57:53.295Z] [2024-11-21 04:57:53.164098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 
offset_end: 12288 00:11:36.819 [2024-11-21 04:57:53.382506] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.819 "name": "raid_bdev1", 00:11:36.819 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:36.819 "strip_size_kb": 0, 00:11:36.819 "state": "online", 00:11:36.819 "raid_level": "raid1", 00:11:36.819 "superblock": false, 00:11:36.819 "num_base_bdevs": 2, 00:11:36.819 "num_base_bdevs_discovered": 2, 00:11:36.819 "num_base_bdevs_operational": 2, 00:11:36.819 "process": { 00:11:36.819 "type": "rebuild", 00:11:36.819 "target": "spare", 00:11:36.819 "progress": { 00:11:36.819 "blocks": 14336, 00:11:36.819 "percent": 21 00:11:36.819 } 00:11:36.819 }, 00:11:36.819 "base_bdevs_list": [ 
00:11:36.819 { 00:11:36.819 "name": "spare", 00:11:36.819 "uuid": "de2fa1e5-f36f-5b45-a43f-3fd954b64fb5", 00:11:36.819 "is_configured": true, 00:11:36.819 "data_offset": 0, 00:11:36.819 "data_size": 65536 00:11:36.819 }, 00:11:36.819 { 00:11:36.819 "name": "BaseBdev2", 00:11:36.819 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:36.819 "is_configured": true, 00:11:36.819 "data_offset": 0, 00:11:36.819 "data_size": 65536 00:11:36.819 } 00:11:36.819 ] 00:11:36.819 }' 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:36.819 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.078 [2024-11-21 04:57:53.595736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.078 [2024-11-21 04:57:53.723671] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:37.078 [2024-11-21 04:57:53.736371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.078 [2024-11-21 04:57:53.736407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.078 [2024-11-21 04:57:53.736421] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:37.078 [2024-11-21 04:57:53.748619] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 
raid_ch: 0x60d000005ee0 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.078 04:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.337 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.337 "name": "raid_bdev1", 00:11:37.337 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:37.337 
"strip_size_kb": 0, 00:11:37.337 "state": "online", 00:11:37.337 "raid_level": "raid1", 00:11:37.337 "superblock": false, 00:11:37.337 "num_base_bdevs": 2, 00:11:37.337 "num_base_bdevs_discovered": 1, 00:11:37.337 "num_base_bdevs_operational": 1, 00:11:37.337 "base_bdevs_list": [ 00:11:37.337 { 00:11:37.337 "name": null, 00:11:37.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.337 "is_configured": false, 00:11:37.337 "data_offset": 0, 00:11:37.337 "data_size": 65536 00:11:37.337 }, 00:11:37.337 { 00:11:37.337 "name": "BaseBdev2", 00:11:37.337 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:37.337 "is_configured": true, 00:11:37.337 "data_offset": 0, 00:11:37.337 "data_size": 65536 00:11:37.337 } 00:11:37.337 ] 00:11:37.337 }' 00:11:37.337 04:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.337 04:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.597 174.00 IOPS, 522.00 MiB/s [2024-11-21T04:57:54.332Z] 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.597 "name": "raid_bdev1", 00:11:37.597 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:37.597 "strip_size_kb": 0, 00:11:37.597 "state": "online", 00:11:37.597 "raid_level": "raid1", 00:11:37.597 "superblock": false, 00:11:37.597 "num_base_bdevs": 2, 00:11:37.597 "num_base_bdevs_discovered": 1, 00:11:37.597 "num_base_bdevs_operational": 1, 00:11:37.597 "base_bdevs_list": [ 00:11:37.597 { 00:11:37.597 "name": null, 00:11:37.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.597 "is_configured": false, 00:11:37.597 "data_offset": 0, 00:11:37.597 "data_size": 65536 00:11:37.597 }, 00:11:37.597 { 00:11:37.597 "name": "BaseBdev2", 00:11:37.597 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:37.597 "is_configured": true, 00:11:37.597 "data_offset": 0, 00:11:37.597 "data_size": 65536 00:11:37.597 } 00:11:37.597 ] 00:11:37.597 }' 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:37.597 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.857 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:37.857 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:37.857 04:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.857 04:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.857 [2024-11-21 04:57:54.348887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:37.857 04:57:54 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.857 04:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:37.857 [2024-11-21 04:57:54.391841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:37.857 [2024-11-21 04:57:54.393789] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:37.857 [2024-11-21 04:57:54.506596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:37.857 [2024-11-21 04:57:54.507119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:38.117 [2024-11-21 04:57:54.726830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:38.117 [2024-11-21 04:57:54.727050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:38.689 164.33 IOPS, 493.00 MiB/s [2024-11-21T04:57:55.424Z] [2024-11-21 04:57:55.150166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:38.689 [2024-11-21 04:57:55.150370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.690 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.690 "name": "raid_bdev1", 00:11:38.690 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:38.690 "strip_size_kb": 0, 00:11:38.690 "state": "online", 00:11:38.690 "raid_level": "raid1", 00:11:38.690 "superblock": false, 00:11:38.690 "num_base_bdevs": 2, 00:11:38.690 "num_base_bdevs_discovered": 2, 00:11:38.690 "num_base_bdevs_operational": 2, 00:11:38.690 "process": { 00:11:38.690 "type": "rebuild", 00:11:38.690 "target": "spare", 00:11:38.690 "progress": { 00:11:38.690 "blocks": 12288, 00:11:38.690 "percent": 18 00:11:38.690 } 00:11:38.690 }, 00:11:38.690 "base_bdevs_list": [ 00:11:38.690 { 00:11:38.690 "name": "spare", 00:11:38.690 "uuid": "de2fa1e5-f36f-5b45-a43f-3fd954b64fb5", 00:11:38.690 "is_configured": true, 00:11:38.690 "data_offset": 0, 00:11:38.690 "data_size": 65536 00:11:38.690 }, 00:11:38.690 { 00:11:38.690 "name": "BaseBdev2", 00:11:38.690 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:38.690 "is_configured": true, 00:11:38.690 "data_offset": 0, 00:11:38.690 "data_size": 65536 00:11:38.690 } 00:11:38.690 ] 00:11:38.690 }' 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:38.950 04:57:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.950 [2024-11-21 04:57:55.489382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=327 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.950 "name": "raid_bdev1", 00:11:38.950 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:38.950 "strip_size_kb": 0, 00:11:38.950 "state": "online", 00:11:38.950 "raid_level": "raid1", 00:11:38.950 "superblock": false, 00:11:38.950 "num_base_bdevs": 2, 00:11:38.950 "num_base_bdevs_discovered": 2, 00:11:38.950 "num_base_bdevs_operational": 2, 00:11:38.950 "process": { 00:11:38.950 "type": "rebuild", 00:11:38.950 "target": "spare", 00:11:38.950 "progress": { 00:11:38.950 "blocks": 16384, 00:11:38.950 "percent": 25 00:11:38.950 } 00:11:38.950 }, 00:11:38.950 "base_bdevs_list": [ 00:11:38.950 { 00:11:38.950 "name": "spare", 00:11:38.950 "uuid": "de2fa1e5-f36f-5b45-a43f-3fd954b64fb5", 00:11:38.950 "is_configured": true, 00:11:38.950 "data_offset": 0, 00:11:38.950 "data_size": 65536 00:11:38.950 }, 00:11:38.950 { 00:11:38.950 "name": "BaseBdev2", 00:11:38.950 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:38.950 "is_configured": true, 00:11:38.950 "data_offset": 0, 00:11:38.950 "data_size": 65536 00:11:38.950 } 00:11:38.950 ] 00:11:38.950 }' 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:38.950 04:57:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:39.210 [2024-11-21 04:57:55.817790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 
00:11:39.470 142.00 IOPS, 426.00 MiB/s [2024-11-21T04:57:56.205Z] [2024-11-21 04:57:56.145392] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:39.470 [2024-11-21 04:57:56.145882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.038 "name": "raid_bdev1", 00:11:40.038 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:40.038 "strip_size_kb": 0, 00:11:40.038 "state": "online", 00:11:40.038 "raid_level": "raid1", 00:11:40.038 "superblock": false, 00:11:40.038 "num_base_bdevs": 2, 00:11:40.038 
"num_base_bdevs_discovered": 2, 00:11:40.038 "num_base_bdevs_operational": 2, 00:11:40.038 "process": { 00:11:40.038 "type": "rebuild", 00:11:40.038 "target": "spare", 00:11:40.038 "progress": { 00:11:40.038 "blocks": 32768, 00:11:40.038 "percent": 50 00:11:40.038 } 00:11:40.038 }, 00:11:40.038 "base_bdevs_list": [ 00:11:40.038 { 00:11:40.038 "name": "spare", 00:11:40.038 "uuid": "de2fa1e5-f36f-5b45-a43f-3fd954b64fb5", 00:11:40.038 "is_configured": true, 00:11:40.038 "data_offset": 0, 00:11:40.038 "data_size": 65536 00:11:40.038 }, 00:11:40.038 { 00:11:40.038 "name": "BaseBdev2", 00:11:40.038 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:40.038 "is_configured": true, 00:11:40.038 "data_offset": 0, 00:11:40.038 "data_size": 65536 00:11:40.038 } 00:11:40.038 ] 00:11:40.038 }' 00:11:40.038 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.298 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.298 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.298 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.298 04:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:40.298 [2024-11-21 04:57:56.935139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:40.557 [2024-11-21 04:57:57.048635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:40.816 125.80 IOPS, 377.40 MiB/s [2024-11-21T04:57:57.551Z] [2024-11-21 04:57:57.382453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:40.816 [2024-11-21 04:57:57.382691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.384 "name": "raid_bdev1", 00:11:41.384 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:41.384 "strip_size_kb": 0, 00:11:41.384 "state": "online", 00:11:41.384 "raid_level": "raid1", 00:11:41.384 "superblock": false, 00:11:41.384 "num_base_bdevs": 2, 00:11:41.384 "num_base_bdevs_discovered": 2, 00:11:41.384 "num_base_bdevs_operational": 2, 00:11:41.384 "process": { 00:11:41.384 "type": "rebuild", 00:11:41.384 "target": "spare", 00:11:41.384 "progress": { 00:11:41.384 "blocks": 53248, 00:11:41.384 "percent": 81 00:11:41.384 } 00:11:41.384 }, 00:11:41.384 "base_bdevs_list": [ 00:11:41.384 { 
00:11:41.384 "name": "spare", 00:11:41.384 "uuid": "de2fa1e5-f36f-5b45-a43f-3fd954b64fb5", 00:11:41.384 "is_configured": true, 00:11:41.384 "data_offset": 0, 00:11:41.384 "data_size": 65536 00:11:41.384 }, 00:11:41.384 { 00:11:41.384 "name": "BaseBdev2", 00:11:41.384 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:41.384 "is_configured": true, 00:11:41.384 "data_offset": 0, 00:11:41.384 "data_size": 65536 00:11:41.384 } 00:11:41.384 ] 00:11:41.384 }' 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:41.384 04:57:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:41.384 [2024-11-21 04:57:58.027517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:11:41.643 110.17 IOPS, 330.50 MiB/s [2024-11-21T04:57:58.378Z] [2024-11-21 04:57:58.248169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:41.903 [2024-11-21 04:57:58.576150] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:42.162 [2024-11-21 04:57:58.675956] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:42.162 [2024-11-21 04:57:58.677864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.421 04:57:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:42.421 04:57:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:11:42.421 04:57:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.421 04:57:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:42.421 04:57:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:42.421 04:57:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.421 04:57:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.421 04:57:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.421 04:57:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.421 04:57:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.421 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.421 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.421 "name": "raid_bdev1", 00:11:42.421 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:42.421 "strip_size_kb": 0, 00:11:42.421 "state": "online", 00:11:42.421 "raid_level": "raid1", 00:11:42.421 "superblock": false, 00:11:42.421 "num_base_bdevs": 2, 00:11:42.421 "num_base_bdevs_discovered": 2, 00:11:42.421 "num_base_bdevs_operational": 2, 00:11:42.421 "base_bdevs_list": [ 00:11:42.421 { 00:11:42.421 "name": "spare", 00:11:42.421 "uuid": "de2fa1e5-f36f-5b45-a43f-3fd954b64fb5", 00:11:42.421 "is_configured": true, 00:11:42.421 "data_offset": 0, 00:11:42.421 "data_size": 65536 00:11:42.421 }, 00:11:42.421 { 00:11:42.421 "name": "BaseBdev2", 00:11:42.421 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:42.421 "is_configured": true, 00:11:42.421 "data_offset": 0, 00:11:42.421 "data_size": 65536 00:11:42.421 } 00:11:42.421 ] 00:11:42.421 }' 00:11:42.422 04:57:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.422 98.43 IOPS, 295.29 MiB/s [2024-11-21T04:57:59.157Z] 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.422 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.704 "name": "raid_bdev1", 00:11:42.704 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:42.704 "strip_size_kb": 0, 00:11:42.704 "state": "online", 00:11:42.704 "raid_level": "raid1", 00:11:42.704 
"superblock": false, 00:11:42.704 "num_base_bdevs": 2, 00:11:42.704 "num_base_bdevs_discovered": 2, 00:11:42.704 "num_base_bdevs_operational": 2, 00:11:42.704 "base_bdevs_list": [ 00:11:42.704 { 00:11:42.704 "name": "spare", 00:11:42.704 "uuid": "de2fa1e5-f36f-5b45-a43f-3fd954b64fb5", 00:11:42.704 "is_configured": true, 00:11:42.704 "data_offset": 0, 00:11:42.704 "data_size": 65536 00:11:42.704 }, 00:11:42.704 { 00:11:42.704 "name": "BaseBdev2", 00:11:42.704 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:42.704 "is_configured": true, 00:11:42.704 "data_offset": 0, 00:11:42.704 "data_size": 65536 00:11:42.704 } 00:11:42.704 ] 00:11:42.704 }' 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.704 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.704 "name": "raid_bdev1", 00:11:42.704 "uuid": "d3153fe9-62c4-4c6b-aad9-2f28f12f1a6f", 00:11:42.704 "strip_size_kb": 0, 00:11:42.704 "state": "online", 00:11:42.705 "raid_level": "raid1", 00:11:42.705 "superblock": false, 00:11:42.705 "num_base_bdevs": 2, 00:11:42.705 "num_base_bdevs_discovered": 2, 00:11:42.705 "num_base_bdevs_operational": 2, 00:11:42.705 "base_bdevs_list": [ 00:11:42.705 { 00:11:42.705 "name": "spare", 00:11:42.705 "uuid": "de2fa1e5-f36f-5b45-a43f-3fd954b64fb5", 00:11:42.705 "is_configured": true, 00:11:42.705 "data_offset": 0, 00:11:42.705 "data_size": 65536 00:11:42.705 }, 00:11:42.705 { 00:11:42.705 "name": "BaseBdev2", 00:11:42.705 "uuid": "7da0cb0d-fdc3-5ce1-983a-08bd7194a460", 00:11:42.705 "is_configured": true, 00:11:42.705 "data_offset": 0, 00:11:42.705 "data_size": 65536 00:11:42.705 } 00:11:42.705 ] 00:11:42.705 }' 00:11:42.705 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.705 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.964 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.964 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.964 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.964 [2024-11-21 04:57:59.679977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.964 [2024-11-21 04:57:59.680006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.222 00:11:43.222 Latency(us) 00:11:43.222 [2024-11-21T04:57:59.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.222 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:43.222 raid_bdev1 : 7.63 92.00 276.01 0.00 0.00 15756.79 277.24 113099.68 00:11:43.222 [2024-11-21T04:57:59.957Z] =================================================================================================================== 00:11:43.222 [2024-11-21T04:57:59.957Z] Total : 92.00 276.01 0.00 0.00 15756.79 277.24 113099.68 00:11:43.222 [2024-11-21 04:57:59.747731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.222 { 00:11:43.222 "results": [ 00:11:43.222 { 00:11:43.222 "job": "raid_bdev1", 00:11:43.222 "core_mask": "0x1", 00:11:43.222 "workload": "randrw", 00:11:43.222 "percentage": 50, 00:11:43.222 "status": "finished", 00:11:43.222 "queue_depth": 2, 00:11:43.222 "io_size": 3145728, 00:11:43.222 "runtime": 7.630048, 00:11:43.222 "iops": 92.00466366659816, 00:11:43.222 "mibps": 276.0139909997945, 00:11:43.222 "io_failed": 0, 00:11:43.222 "io_timeout": 0, 00:11:43.222 "avg_latency_us": 15756.78519016161, 00:11:43.222 "min_latency_us": 277.2401746724891, 00:11:43.222 "max_latency_us": 113099.68209606987 00:11:43.222 } 00:11:43.222 ], 00:11:43.222 "core_count": 1 00:11:43.222 } 00:11:43.222 [2024-11-21 04:57:59.747837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:11:43.222 [2024-11-21 04:57:59.747932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.223 [2024-11-21 04:57:59.747944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:43.223 
04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:43.223 04:57:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:43.482 /dev/nbd0 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.482 1+0 records in 00:11:43.482 1+0 records out 00:11:43.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382087 s, 10.7 MB/s 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.482 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 
00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:43.483 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:43.743 /dev/nbd1 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.743 1+0 records in 00:11:43.743 1+0 records out 00:11:43.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313783 s, 13.1 MB/s 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.743 
04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.743 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.001 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87294 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 87294 ']' 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 87294 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:11:44.259 04:58:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87294 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87294' 00:11:44.259 killing process with pid 87294 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 87294 00:11:44.259 Received shutdown signal, test time was about 8.675597 seconds 00:11:44.259 00:11:44.259 Latency(us) 00:11:44.259 [2024-11-21T04:58:00.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.259 [2024-11-21T04:58:00.994Z] =================================================================================================================== 00:11:44.259 [2024-11-21T04:58:00.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:44.259 [2024-11-21 04:58:00.787354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.259 04:58:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 87294 00:11:44.259 [2024-11-21 04:58:00.813815] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:44.520 00:11:44.520 real 0m10.533s 00:11:44.520 user 0m13.577s 00:11:44.520 sys 0m1.368s 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.520 ************************************ 00:11:44.520 END TEST raid_rebuild_test_io 00:11:44.520 
************************************ 00:11:44.520 04:58:01 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:44.520 04:58:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:44.520 04:58:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.520 04:58:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.520 ************************************ 00:11:44.520 START TEST raid_rebuild_test_sb_io 00:11:44.520 ************************************ 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.520 04:58:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87652 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87652 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 87652 ']' 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.520 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.520 [2024-11-21 04:58:01.179958] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:11:44.520 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:44.520 Zero copy mechanism will not be used. 00:11:44.520 [2024-11-21 04:58:01.180178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87652 ] 00:11:44.780 [2024-11-21 04:58:01.349363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.780 [2024-11-21 04:58:01.375005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.780 [2024-11-21 04:58:01.417685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.780 [2024-11-21 04:58:01.417715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.349 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.349 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:11:45.349 04:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1_malloc 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.349 BaseBdev1_malloc 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.349 [2024-11-21 04:58:02.024037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:45.349 [2024-11-21 04:58:02.024129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.349 [2024-11-21 04:58:02.024164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:45.349 [2024-11-21 04:58:02.024179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.349 [2024-11-21 04:58:02.026304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.349 [2024-11-21 04:58:02.026374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.349 BaseBdev1 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.349 04:58:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.349 BaseBdev2_malloc 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.349 [2024-11-21 04:58:02.052664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:45.349 [2024-11-21 04:58:02.052717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.349 [2024-11-21 04:58:02.052736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:45.349 [2024-11-21 04:58:02.052744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.349 [2024-11-21 04:58:02.054797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.349 [2024-11-21 04:58:02.054830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.349 BaseBdev2 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.349 spare_malloc 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.349 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.609 spare_delay 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.609 [2024-11-21 04:58:02.093241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:45.609 [2024-11-21 04:58:02.093292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.609 [2024-11-21 04:58:02.093312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:45.609 [2024-11-21 04:58:02.093320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.609 [2024-11-21 04:58:02.095495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.609 [2024-11-21 04:58:02.095575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:45.609 spare 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.609 [2024-11-21 04:58:02.105256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.609 [2024-11-21 04:58:02.107048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.609 [2024-11-21 04:58:02.107207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:45.609 [2024-11-21 04:58:02.107248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.609 [2024-11-21 04:58:02.107506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:45.609 [2024-11-21 04:58:02.107649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:45.609 [2024-11-21 04:58:02.107661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:45.609 [2024-11-21 04:58:02.107765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.609 
04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.609 "name": "raid_bdev1", 00:11:45.609 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:45.609 "strip_size_kb": 0, 00:11:45.609 "state": "online", 00:11:45.609 "raid_level": "raid1", 00:11:45.609 "superblock": true, 00:11:45.609 "num_base_bdevs": 2, 00:11:45.609 "num_base_bdevs_discovered": 2, 00:11:45.609 "num_base_bdevs_operational": 2, 00:11:45.609 "base_bdevs_list": [ 00:11:45.609 { 00:11:45.609 "name": "BaseBdev1", 00:11:45.609 "uuid": "248cef7e-85d4-5a90-aa29-8d8c530f189d", 00:11:45.609 "is_configured": true, 00:11:45.609 "data_offset": 2048, 00:11:45.609 "data_size": 63488 00:11:45.609 }, 00:11:45.609 { 00:11:45.609 "name": "BaseBdev2", 00:11:45.609 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:45.609 "is_configured": true, 00:11:45.609 "data_offset": 2048, 00:11:45.609 "data_size": 63488 00:11:45.609 } 00:11:45.609 ] 00:11:45.609 }' 00:11:45.609 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.609 04:58:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.869 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.869 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:45.869 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.869 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.869 [2024-11-21 04:58:02.564796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.869 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.129 [2024-11-21 04:58:02.636357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.129 04:58:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.129 "name": "raid_bdev1", 00:11:46.129 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:46.129 "strip_size_kb": 0, 00:11:46.129 "state": "online", 00:11:46.129 "raid_level": "raid1", 00:11:46.129 "superblock": true, 00:11:46.129 "num_base_bdevs": 2, 00:11:46.129 "num_base_bdevs_discovered": 1, 00:11:46.129 "num_base_bdevs_operational": 1, 00:11:46.129 "base_bdevs_list": [ 00:11:46.129 { 00:11:46.129 "name": null, 00:11:46.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.129 "is_configured": false, 00:11:46.129 "data_offset": 0, 00:11:46.129 "data_size": 63488 00:11:46.129 }, 00:11:46.129 { 00:11:46.129 "name": "BaseBdev2", 00:11:46.129 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:46.129 "is_configured": true, 00:11:46.129 "data_offset": 2048, 00:11:46.129 "data_size": 63488 00:11:46.129 } 00:11:46.129 ] 00:11:46.129 }' 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.129 04:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.129 [2024-11-21 04:58:02.710175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:46.129 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:46.129 Zero copy mechanism will not be used. 00:11:46.129 Running I/O for 60 seconds... 
00:11:46.388 04:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:46.388 04:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.388 04:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.388 [2024-11-21 04:58:03.042693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:46.388 04:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.388 04:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:46.388 [2024-11-21 04:58:03.088821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:46.388 [2024-11-21 04:58:03.090727] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:46.648 [2024-11-21 04:58:03.215903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:46.648 [2024-11-21 04:58:03.216532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:46.907 [2024-11-21 04:58:03.417838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:46.907 [2024-11-21 04:58:03.418201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:47.167 168.00 IOPS, 504.00 MiB/s [2024-11-21T04:58:03.902Z] [2024-11-21 04:58:03.871303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.426 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.426 "name": "raid_bdev1", 00:11:47.426 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:47.426 "strip_size_kb": 0, 00:11:47.426 "state": "online", 00:11:47.426 "raid_level": "raid1", 00:11:47.426 "superblock": true, 00:11:47.426 "num_base_bdevs": 2, 00:11:47.426 "num_base_bdevs_discovered": 2, 00:11:47.426 "num_base_bdevs_operational": 2, 00:11:47.426 "process": { 00:11:47.426 "type": "rebuild", 00:11:47.426 "target": "spare", 00:11:47.426 "progress": { 00:11:47.426 "blocks": 10240, 00:11:47.426 "percent": 16 00:11:47.426 } 00:11:47.426 }, 00:11:47.426 "base_bdevs_list": [ 00:11:47.426 { 00:11:47.426 "name": "spare", 00:11:47.426 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:47.426 "is_configured": true, 00:11:47.426 "data_offset": 2048, 00:11:47.427 "data_size": 63488 00:11:47.427 }, 00:11:47.427 { 00:11:47.427 "name": "BaseBdev2", 00:11:47.427 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:47.427 "is_configured": true, 
00:11:47.427 "data_offset": 2048, 00:11:47.427 "data_size": 63488 00:11:47.427 } 00:11:47.427 ] 00:11:47.427 }' 00:11:47.427 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.685 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.686 [2024-11-21 04:58:04.209866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:47.686 [2024-11-21 04:58:04.304791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:47.686 [2024-11-21 04:58:04.330666] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:47.686 [2024-11-21 04:58:04.337984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.686 [2024-11-21 04:58:04.338025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:47.686 [2024-11-21 04:58:04.338036] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:47.686 [2024-11-21 04:58:04.355172] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.686 04:58:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.686 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.945 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.945 "name": "raid_bdev1", 00:11:47.945 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:47.945 "strip_size_kb": 0, 00:11:47.945 "state": "online", 00:11:47.945 "raid_level": "raid1", 00:11:47.945 
"superblock": true, 00:11:47.945 "num_base_bdevs": 2, 00:11:47.945 "num_base_bdevs_discovered": 1, 00:11:47.945 "num_base_bdevs_operational": 1, 00:11:47.945 "base_bdevs_list": [ 00:11:47.945 { 00:11:47.945 "name": null, 00:11:47.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.945 "is_configured": false, 00:11:47.945 "data_offset": 0, 00:11:47.945 "data_size": 63488 00:11:47.945 }, 00:11:47.945 { 00:11:47.945 "name": "BaseBdev2", 00:11:47.945 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:47.945 "is_configured": true, 00:11:47.945 "data_offset": 2048, 00:11:47.945 "data_size": 63488 00:11:47.945 } 00:11:47.945 ] 00:11:47.945 }' 00:11:47.945 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.945 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.204 166.00 IOPS, 498.00 MiB/s [2024-11-21T04:58:04.939Z] 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.204 "name": "raid_bdev1", 00:11:48.204 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:48.204 "strip_size_kb": 0, 00:11:48.204 "state": "online", 00:11:48.204 "raid_level": "raid1", 00:11:48.204 "superblock": true, 00:11:48.204 "num_base_bdevs": 2, 00:11:48.204 "num_base_bdevs_discovered": 1, 00:11:48.204 "num_base_bdevs_operational": 1, 00:11:48.204 "base_bdevs_list": [ 00:11:48.204 { 00:11:48.204 "name": null, 00:11:48.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.204 "is_configured": false, 00:11:48.204 "data_offset": 0, 00:11:48.204 "data_size": 63488 00:11:48.204 }, 00:11:48.204 { 00:11:48.204 "name": "BaseBdev2", 00:11:48.204 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:48.204 "is_configured": true, 00:11:48.204 "data_offset": 2048, 00:11:48.204 "data_size": 63488 00:11:48.204 } 00:11:48.204 ] 00:11:48.204 }' 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.204 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.472 [2024-11-21 04:58:04.941373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:48.472 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.472 04:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:48.472 [2024-11-21 04:58:04.972716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:48.472 [2024-11-21 04:58:04.974637] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:48.472 [2024-11-21 04:58:05.086581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:48.472 [2024-11-21 04:58:05.087077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:48.758 [2024-11-21 04:58:05.300516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:48.758 [2024-11-21 04:58:05.300746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:49.018 [2024-11-21 04:58:05.645602] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:49.018 [2024-11-21 04:58:05.651241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:49.278 152.33 IOPS, 457.00 MiB/s [2024-11-21T04:58:06.013Z] 04:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:49.278 04:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.278 04:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:49.278 04:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:49.278 04:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
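The rebuild-progress checks in this log lean on jq's alternative operator: `.process.type // "none"` and `.process.target // "none"` yield `"none"` whenever the `process` object is absent, so the same comparison works before, during, and after a rebuild. A minimal sketch of that pattern (the JSON fragments below are illustrative, echoing the logged structure):

```shell
# During a rebuild the raid bdev carries a "process" object; once the
# rebuild finishes, the object disappears from the RPC reply.
during='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}'
after='{"name":"raid_bdev1"}'

# `//` substitutes the fallback when the path is null or missing:
echo "$during" | jq -r '.process.type   // "none"'   # prints: rebuild
echo "$during" | jq -r '.process.target // "none"'   # prints: spare
echo "$after"  | jq -r '.process.type   // "none"'   # prints: none
```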
00:11:49.278 04:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.278 04:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.278 04:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.278 04:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:49.278 [2024-11-21 04:58:05.995606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:49.278 [2024-11-21 04:58:05.996027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:49.278 04:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.538 "name": "raid_bdev1", 00:11:49.538 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:49.538 "strip_size_kb": 0, 00:11:49.538 "state": "online", 00:11:49.538 "raid_level": "raid1", 00:11:49.538 "superblock": true, 00:11:49.538 "num_base_bdevs": 2, 00:11:49.538 "num_base_bdevs_discovered": 2, 00:11:49.538 "num_base_bdevs_operational": 2, 00:11:49.538 "process": { 00:11:49.538 "type": "rebuild", 00:11:49.538 "target": "spare", 00:11:49.538 "progress": { 00:11:49.538 "blocks": 12288, 00:11:49.538 "percent": 19 00:11:49.538 } 00:11:49.538 }, 00:11:49.538 "base_bdevs_list": [ 00:11:49.538 { 00:11:49.538 "name": "spare", 00:11:49.538 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:49.538 "is_configured": true, 00:11:49.538 "data_offset": 2048, 00:11:49.538 "data_size": 63488 00:11:49.538 }, 00:11:49.538 { 00:11:49.538 "name": "BaseBdev2", 00:11:49.538 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:49.538 "is_configured": true, 00:11:49.538 "data_offset": 2048, 
00:11:49.538 "data_size": 63488 00:11:49.538 } 00:11:49.538 ] 00:11:49.538 }' 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:49.538 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=338 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.538 
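The `line 666: [: =: unary operator expected` failure captured above is the classic single-bracket quoting pitfall: the trace shows `'[' = false ']'`, i.e. the tested variable expanded to nothing and `[` was left without a left-hand operand. A reproduction and the usual quoting fix (the variable name `flag` is illustrative, not taken from bdev_raid.sh):

```shell
flag=""   # empty, as in the logged run

# Unquoted, the test collapses to `[ = false ]`, which produces the
# "unary operator expected" error seen in the log and exit status 2:
status=0
[ $flag = false ] 2>/dev/null || status=$?
echo "$status"   # prints: 2

# Quoting preserves the empty operand, so the comparison is well formed
# and simply evaluates to false instead of erroring out:
if [ "$flag" = false ]; then
    echo "flag is false"
else
    echo "flag is empty or unset"
fi
```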
04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.538 "name": "raid_bdev1", 00:11:49.538 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:49.538 "strip_size_kb": 0, 00:11:49.538 "state": "online", 00:11:49.538 "raid_level": "raid1", 00:11:49.538 "superblock": true, 00:11:49.538 "num_base_bdevs": 2, 00:11:49.538 "num_base_bdevs_discovered": 2, 00:11:49.538 "num_base_bdevs_operational": 2, 00:11:49.538 "process": { 00:11:49.538 "type": "rebuild", 00:11:49.538 "target": "spare", 00:11:49.538 "progress": { 00:11:49.538 "blocks": 14336, 00:11:49.538 "percent": 22 00:11:49.538 } 00:11:49.538 }, 00:11:49.538 "base_bdevs_list": [ 00:11:49.538 { 00:11:49.538 "name": "spare", 00:11:49.538 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:49.538 "is_configured": true, 00:11:49.538 "data_offset": 2048, 00:11:49.538 "data_size": 63488 00:11:49.538 }, 00:11:49.538 { 00:11:49.538 "name": "BaseBdev2", 00:11:49.538 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:49.538 "is_configured": true, 00:11:49.538 "data_offset": 2048, 00:11:49.538 "data_size": 63488 00:11:49.538 } 00:11:49.538 ] 00:11:49.538 }' 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.538 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.539 04:58:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.539 [2024-11-21 04:58:06.229171] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:49.539 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:49.539 04:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:49.798 [2024-11-21 04:58:06.450608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:49.798 [2024-11-21 04:58:06.450937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:50.057 [2024-11-21 04:58:06.576767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:50.316 136.50 IOPS, 409.50 MiB/s [2024-11-21T04:58:07.051Z] [2024-11-21 04:58:06.814139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:50.316 [2024-11-21 04:58:06.814543] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:50.316 [2024-11-21 04:58:07.034196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.576 
04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.576 "name": "raid_bdev1", 00:11:50.576 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:50.576 "strip_size_kb": 0, 00:11:50.576 "state": "online", 00:11:50.576 "raid_level": "raid1", 00:11:50.576 "superblock": true, 00:11:50.576 "num_base_bdevs": 2, 00:11:50.576 "num_base_bdevs_discovered": 2, 00:11:50.576 "num_base_bdevs_operational": 2, 00:11:50.576 "process": { 00:11:50.576 "type": "rebuild", 00:11:50.576 "target": "spare", 00:11:50.576 "progress": { 00:11:50.576 "blocks": 28672, 00:11:50.576 "percent": 45 00:11:50.576 } 00:11:50.576 }, 00:11:50.576 "base_bdevs_list": [ 00:11:50.576 { 00:11:50.576 "name": "spare", 00:11:50.576 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:50.576 "is_configured": true, 00:11:50.576 "data_offset": 2048, 00:11:50.576 "data_size": 63488 00:11:50.576 }, 00:11:50.576 { 00:11:50.576 "name": "BaseBdev2", 00:11:50.576 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:50.576 "is_configured": true, 00:11:50.576 "data_offset": 2048, 00:11:50.576 "data_size": 63488 00:11:50.576 } 00:11:50.576 ] 00:11:50.576 }' 00:11:50.576 04:58:07 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.835 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:50.835 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.835 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.835 04:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:50.835 [2024-11-21 04:58:07.473552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:50.835 [2024-11-21 04:58:07.473772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:51.662 116.40 IOPS, 349.20 MiB/s [2024-11-21T04:58:08.397Z] [2024-11-21 04:58:08.137251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:51.662 [2024-11-21 04:58:08.355728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:51.662 [2024-11-21 04:58:08.355929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:51.662 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:51.662 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.662 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.662 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.662 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.662 04:58:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.662 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.662 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.662 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.662 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.921 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.921 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.921 "name": "raid_bdev1", 00:11:51.921 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:51.921 "strip_size_kb": 0, 00:11:51.921 "state": "online", 00:11:51.921 "raid_level": "raid1", 00:11:51.921 "superblock": true, 00:11:51.921 "num_base_bdevs": 2, 00:11:51.921 "num_base_bdevs_discovered": 2, 00:11:51.921 "num_base_bdevs_operational": 2, 00:11:51.921 "process": { 00:11:51.921 "type": "rebuild", 00:11:51.921 "target": "spare", 00:11:51.921 "progress": { 00:11:51.921 "blocks": 47104, 00:11:51.921 "percent": 74 00:11:51.921 } 00:11:51.921 }, 00:11:51.921 "base_bdevs_list": [ 00:11:51.921 { 00:11:51.921 "name": "spare", 00:11:51.921 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:51.921 "is_configured": true, 00:11:51.921 "data_offset": 2048, 00:11:51.921 "data_size": 63488 00:11:51.921 }, 00:11:51.921 { 00:11:51.921 "name": "BaseBdev2", 00:11:51.921 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:51.921 "is_configured": true, 00:11:51.921 "data_offset": 2048, 00:11:51.921 "data_size": 63488 00:11:51.921 } 00:11:51.921 ] 00:11:51.921 }' 00:11:51.921 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.921 04:58:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.921 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.921 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.921 04:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:52.181 [2024-11-21 04:58:08.671963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:52.181 103.67 IOPS, 311.00 MiB/s [2024-11-21T04:58:08.916Z] [2024-11-21 04:58:08.774230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:52.439 [2024-11-21 04:58:09.109143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:53.006 [2024-11-21 04:58:09.436260] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.006 [2024-11-21 04:58:09.536076] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:53.006 [2024-11-21 04:58:09.537608] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.006 "name": "raid_bdev1", 00:11:53.006 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:53.006 "strip_size_kb": 0, 00:11:53.006 "state": "online", 00:11:53.006 "raid_level": "raid1", 00:11:53.006 "superblock": true, 00:11:53.006 "num_base_bdevs": 2, 00:11:53.006 "num_base_bdevs_discovered": 2, 00:11:53.006 "num_base_bdevs_operational": 2, 00:11:53.006 "base_bdevs_list": [ 00:11:53.006 { 00:11:53.006 "name": "spare", 00:11:53.006 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:53.006 "is_configured": true, 00:11:53.006 "data_offset": 2048, 00:11:53.006 "data_size": 63488 00:11:53.006 }, 00:11:53.006 { 00:11:53.006 "name": "BaseBdev2", 00:11:53.006 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:53.006 "is_configured": true, 00:11:53.006 "data_offset": 2048, 00:11:53.006 "data_size": 63488 00:11:53.006 } 00:11:53.006 ] 00:11:53.006 }' 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.006 04:58:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.006 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.006 93.71 IOPS, 281.14 MiB/s [2024-11-21T04:58:09.741Z] 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.006 "name": "raid_bdev1", 00:11:53.006 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:53.006 "strip_size_kb": 0, 00:11:53.006 "state": "online", 00:11:53.006 "raid_level": "raid1", 00:11:53.006 "superblock": true, 00:11:53.006 "num_base_bdevs": 2, 00:11:53.006 "num_base_bdevs_discovered": 2, 00:11:53.006 "num_base_bdevs_operational": 2, 00:11:53.006 "base_bdevs_list": [ 00:11:53.006 { 00:11:53.006 "name": "spare", 00:11:53.006 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:53.006 "is_configured": 
true, 00:11:53.006 "data_offset": 2048, 00:11:53.006 "data_size": 63488 00:11:53.006 }, 00:11:53.006 { 00:11:53.006 "name": "BaseBdev2", 00:11:53.006 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:53.006 "is_configured": true, 00:11:53.006 "data_offset": 2048, 00:11:53.006 "data_size": 63488 00:11:53.006 } 00:11:53.007 ] 00:11:53.007 }' 00:11:53.007 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.266 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.267 "name": "raid_bdev1", 00:11:53.267 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:53.267 "strip_size_kb": 0, 00:11:53.267 "state": "online", 00:11:53.267 "raid_level": "raid1", 00:11:53.267 "superblock": true, 00:11:53.267 "num_base_bdevs": 2, 00:11:53.267 "num_base_bdevs_discovered": 2, 00:11:53.267 "num_base_bdevs_operational": 2, 00:11:53.267 "base_bdevs_list": [ 00:11:53.267 { 00:11:53.267 "name": "spare", 00:11:53.267 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:53.267 "is_configured": true, 00:11:53.267 "data_offset": 2048, 00:11:53.267 "data_size": 63488 00:11:53.267 }, 00:11:53.267 { 00:11:53.267 "name": "BaseBdev2", 00:11:53.267 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:53.267 "is_configured": true, 00:11:53.267 "data_offset": 2048, 00:11:53.267 "data_size": 63488 00:11:53.267 } 00:11:53.267 ] 00:11:53.267 }' 00:11:53.267 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.267 04:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:11:53.526 [2024-11-21 04:58:10.172584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.526 [2024-11-21 04:58:10.172660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.526 00:11:53.526 Latency(us) 00:11:53.526 [2024-11-21T04:58:10.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.526 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:53.526 raid_bdev1 : 7.49 89.18 267.53 0.00 0.00 15314.32 270.09 114015.47 00:11:53.526 [2024-11-21T04:58:10.261Z] =================================================================================================================== 00:11:53.526 [2024-11-21T04:58:10.261Z] Total : 89.18 267.53 0.00 0.00 15314.32 270.09 114015.47 00:11:53.526 [2024-11-21 04:58:10.192027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.526 [2024-11-21 04:58:10.192066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.526 [2024-11-21 04:58:10.192157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.526 [2024-11-21 04:58:10.192167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:53.526 { 00:11:53.526 "results": [ 00:11:53.526 { 00:11:53.526 "job": "raid_bdev1", 00:11:53.526 "core_mask": "0x1", 00:11:53.526 "workload": "randrw", 00:11:53.526 "percentage": 50, 00:11:53.526 "status": "finished", 00:11:53.526 "queue_depth": 2, 00:11:53.526 "io_size": 3145728, 00:11:53.526 "runtime": 7.490628, 00:11:53.526 "iops": 89.1781036249564, 00:11:53.526 "mibps": 267.5343108748692, 00:11:53.526 "io_failed": 0, 00:11:53.526 "io_timeout": 0, 00:11:53.526 "avg_latency_us": 15314.32391810266, 00:11:53.526 "min_latency_us": 270.0855895196507, 00:11:53.526 "max_latency_us": 114015.46899563319 
00:11:53.526 } 00:11:53.526 ], 00:11:53.526 "core_count": 1 00:11:53.526 } 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:11:53.526 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:53.785 /dev/nbd0 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.786 1+0 records in 00:11:53.786 1+0 records out 00:11:53.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522846 s, 7.8 MB/s 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:53.786 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:54.046 /dev/nbd1 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:54.046 04:58:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.046 1+0 records in 00:11:54.046 1+0 records out 00:11:54.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469988 s, 8.7 MB/s 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.046 
04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:54.046 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock 
/dev/nbd0 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.307 04:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 
04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.566 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.567 [2024-11-21 04:58:11.200008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:54.567 [2024-11-21 04:58:11.200063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.567 [2024-11-21 04:58:11.200101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:54.567 [2024-11-21 04:58:11.200119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.567 [2024-11-21 04:58:11.202278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.567 [2024-11-21 04:58:11.202312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:54.567 [2024-11-21 04:58:11.202392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:54.567 [2024-11-21 04:58:11.202432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:54.567 [2024-11-21 04:58:11.202551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.567 spare 00:11:54.567 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.567 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:54.567 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.567 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:11:54.826 [2024-11-21 04:58:11.302441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:54.826 [2024-11-21 04:58:11.302516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:54.826 [2024-11-21 04:58:11.302851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:11:54.826 [2024-11-21 04:58:11.303016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:54.826 [2024-11-21 04:58:11.303028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:54.826 [2024-11-21 04:58:11.303193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.826 "name": "raid_bdev1", 00:11:54.826 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:54.826 "strip_size_kb": 0, 00:11:54.826 "state": "online", 00:11:54.826 "raid_level": "raid1", 00:11:54.826 "superblock": true, 00:11:54.826 "num_base_bdevs": 2, 00:11:54.826 "num_base_bdevs_discovered": 2, 00:11:54.826 "num_base_bdevs_operational": 2, 00:11:54.826 "base_bdevs_list": [ 00:11:54.826 { 00:11:54.826 "name": "spare", 00:11:54.826 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:54.826 "is_configured": true, 00:11:54.826 "data_offset": 2048, 00:11:54.826 "data_size": 63488 00:11:54.826 }, 00:11:54.826 { 00:11:54.826 "name": "BaseBdev2", 00:11:54.826 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:54.826 "is_configured": true, 00:11:54.826 "data_offset": 2048, 00:11:54.826 "data_size": 63488 00:11:54.826 } 00:11:54.826 ] 00:11:54.826 }' 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.826 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.086 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.086 "name": "raid_bdev1", 00:11:55.086 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:55.086 "strip_size_kb": 0, 00:11:55.086 "state": "online", 00:11:55.086 "raid_level": "raid1", 00:11:55.086 "superblock": true, 00:11:55.086 "num_base_bdevs": 2, 00:11:55.086 "num_base_bdevs_discovered": 2, 00:11:55.086 "num_base_bdevs_operational": 2, 00:11:55.086 "base_bdevs_list": [ 00:11:55.086 { 00:11:55.086 "name": "spare", 00:11:55.086 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:55.086 "is_configured": true, 00:11:55.086 "data_offset": 2048, 00:11:55.086 "data_size": 63488 00:11:55.086 }, 00:11:55.087 { 00:11:55.087 "name": "BaseBdev2", 00:11:55.087 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:55.087 "is_configured": true, 00:11:55.087 "data_offset": 2048, 00:11:55.087 "data_size": 63488 00:11:55.087 } 00:11:55.087 ] 00:11:55.087 }' 00:11:55.087 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.347 [2024-11-21 04:58:11.970919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.347 04:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.347 04:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.347 "name": "raid_bdev1", 00:11:55.347 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:55.347 "strip_size_kb": 0, 00:11:55.347 "state": "online", 00:11:55.347 "raid_level": "raid1", 00:11:55.347 "superblock": true, 00:11:55.347 "num_base_bdevs": 2, 00:11:55.347 "num_base_bdevs_discovered": 1, 00:11:55.347 "num_base_bdevs_operational": 1, 00:11:55.347 "base_bdevs_list": [ 00:11:55.347 { 00:11:55.347 "name": null, 00:11:55.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.347 "is_configured": false, 00:11:55.347 "data_offset": 0, 00:11:55.347 "data_size": 63488 00:11:55.347 }, 00:11:55.347 { 
00:11:55.347 "name": "BaseBdev2", 00:11:55.347 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:55.347 "is_configured": true, 00:11:55.347 "data_offset": 2048, 00:11:55.347 "data_size": 63488 00:11:55.347 } 00:11:55.347 ] 00:11:55.347 }' 00:11:55.347 04:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.347 04:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.918 04:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:55.918 04:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.918 04:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.918 [2024-11-21 04:58:12.386317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:55.918 [2024-11-21 04:58:12.386571] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:55.918 [2024-11-21 04:58:12.386652] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:55.918 [2024-11-21 04:58:12.386723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:55.918 [2024-11-21 04:58:12.392081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:11:55.918 04:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.918 04:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:55.918 [2024-11-21 04:58:12.394132] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.857 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.857 "name": "raid_bdev1", 00:11:56.857 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:56.857 "strip_size_kb": 0, 00:11:56.857 "state": "online", 
00:11:56.857 "raid_level": "raid1", 00:11:56.857 "superblock": true, 00:11:56.857 "num_base_bdevs": 2, 00:11:56.857 "num_base_bdevs_discovered": 2, 00:11:56.857 "num_base_bdevs_operational": 2, 00:11:56.857 "process": { 00:11:56.857 "type": "rebuild", 00:11:56.858 "target": "spare", 00:11:56.858 "progress": { 00:11:56.858 "blocks": 20480, 00:11:56.858 "percent": 32 00:11:56.858 } 00:11:56.858 }, 00:11:56.858 "base_bdevs_list": [ 00:11:56.858 { 00:11:56.858 "name": "spare", 00:11:56.858 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:56.858 "is_configured": true, 00:11:56.858 "data_offset": 2048, 00:11:56.858 "data_size": 63488 00:11:56.858 }, 00:11:56.858 { 00:11:56.858 "name": "BaseBdev2", 00:11:56.858 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:56.858 "is_configured": true, 00:11:56.858 "data_offset": 2048, 00:11:56.858 "data_size": 63488 00:11:56.858 } 00:11:56.858 ] 00:11:56.858 }' 00:11:56.858 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.858 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.858 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.858 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.858 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:56.858 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.858 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.858 [2024-11-21 04:58:13.558837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:57.117 [2024-11-21 04:58:13.598691] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:57.117 [2024-11-21 
04:58:13.598773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.117 [2024-11-21 04:58:13.598787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:57.117 [2024-11-21 04:58:13.598796] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.117 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.117 "name": "raid_bdev1", 00:11:57.117 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:57.117 "strip_size_kb": 0, 00:11:57.117 "state": "online", 00:11:57.117 "raid_level": "raid1", 00:11:57.117 "superblock": true, 00:11:57.117 "num_base_bdevs": 2, 00:11:57.117 "num_base_bdevs_discovered": 1, 00:11:57.117 "num_base_bdevs_operational": 1, 00:11:57.117 "base_bdevs_list": [ 00:11:57.117 { 00:11:57.117 "name": null, 00:11:57.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.117 "is_configured": false, 00:11:57.117 "data_offset": 0, 00:11:57.118 "data_size": 63488 00:11:57.118 }, 00:11:57.118 { 00:11:57.118 "name": "BaseBdev2", 00:11:57.118 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:57.118 "is_configured": true, 00:11:57.118 "data_offset": 2048, 00:11:57.118 "data_size": 63488 00:11:57.118 } 00:11:57.118 ] 00:11:57.118 }' 00:11:57.118 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.118 04:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.377 04:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:57.377 04:58:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.377 04:58:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.377 [2024-11-21 04:58:14.083155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:57.377 [2024-11-21 04:58:14.083301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.377 [2024-11-21 04:58:14.083380] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:11:57.377 [2024-11-21 04:58:14.083418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.377 [2024-11-21 04:58:14.083953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.377 [2024-11-21 04:58:14.084025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:57.377 [2024-11-21 04:58:14.084188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:57.377 [2024-11-21 04:58:14.084240] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:57.377 [2024-11-21 04:58:14.084313] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:57.377 [2024-11-21 04:58:14.084405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:57.377 [2024-11-21 04:58:14.089600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:11:57.377 spare 00:11:57.377 04:58:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.377 04:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:57.377 [2024-11-21 04:58:14.091576] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:58.758 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.758 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.758 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.758 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.758 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.758 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.758 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.758 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.759 "name": "raid_bdev1", 00:11:58.759 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:58.759 "strip_size_kb": 0, 00:11:58.759 "state": "online", 00:11:58.759 "raid_level": "raid1", 00:11:58.759 "superblock": true, 00:11:58.759 "num_base_bdevs": 2, 00:11:58.759 "num_base_bdevs_discovered": 2, 00:11:58.759 "num_base_bdevs_operational": 2, 00:11:58.759 "process": { 00:11:58.759 "type": "rebuild", 00:11:58.759 "target": "spare", 00:11:58.759 "progress": { 00:11:58.759 "blocks": 20480, 00:11:58.759 "percent": 32 00:11:58.759 } 00:11:58.759 }, 00:11:58.759 "base_bdevs_list": [ 00:11:58.759 { 00:11:58.759 "name": "spare", 00:11:58.759 "uuid": "86a21c40-3c3d-5752-b936-c2e70abf4f9f", 00:11:58.759 "is_configured": true, 00:11:58.759 "data_offset": 2048, 00:11:58.759 "data_size": 63488 00:11:58.759 }, 00:11:58.759 { 00:11:58.759 "name": "BaseBdev2", 00:11:58.759 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:58.759 "is_configured": true, 00:11:58.759 "data_offset": 2048, 00:11:58.759 "data_size": 63488 00:11:58.759 } 00:11:58.759 ] 00:11:58.759 }' 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.759 [2024-11-21 04:58:15.252127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.759 [2024-11-21 04:58:15.295971] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:58.759 [2024-11-21 04:58:15.296029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.759 [2024-11-21 04:58:15.296046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.759 [2024-11-21 04:58:15.296054] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.759 "name": "raid_bdev1", 00:11:58.759 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:58.759 "strip_size_kb": 0, 00:11:58.759 "state": "online", 00:11:58.759 "raid_level": "raid1", 00:11:58.759 "superblock": true, 00:11:58.759 "num_base_bdevs": 2, 00:11:58.759 "num_base_bdevs_discovered": 1, 00:11:58.759 "num_base_bdevs_operational": 1, 00:11:58.759 "base_bdevs_list": [ 00:11:58.759 { 00:11:58.759 "name": null, 00:11:58.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.759 "is_configured": false, 00:11:58.759 "data_offset": 0, 00:11:58.759 "data_size": 63488 00:11:58.759 }, 00:11:58.759 { 00:11:58.759 "name": "BaseBdev2", 00:11:58.759 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:58.759 "is_configured": true, 00:11:58.759 "data_offset": 2048, 00:11:58.759 "data_size": 63488 00:11:58.759 } 00:11:58.759 ] 00:11:58.759 }' 
00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.759 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.329 "name": "raid_bdev1", 00:11:59.329 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:11:59.329 "strip_size_kb": 0, 00:11:59.329 "state": "online", 00:11:59.329 "raid_level": "raid1", 00:11:59.329 "superblock": true, 00:11:59.329 "num_base_bdevs": 2, 00:11:59.329 "num_base_bdevs_discovered": 1, 00:11:59.329 "num_base_bdevs_operational": 1, 00:11:59.329 "base_bdevs_list": [ 00:11:59.329 { 00:11:59.329 "name": null, 00:11:59.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.329 "is_configured": false, 00:11:59.329 "data_offset": 0, 
00:11:59.329 "data_size": 63488 00:11:59.329 }, 00:11:59.329 { 00:11:59.329 "name": "BaseBdev2", 00:11:59.329 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:11:59.329 "is_configured": true, 00:11:59.329 "data_offset": 2048, 00:11:59.329 "data_size": 63488 00:11:59.329 } 00:11:59.329 ] 00:11:59.329 }' 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.329 [2024-11-21 04:58:15.912078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:59.329 [2024-11-21 04:58:15.912148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.329 [2024-11-21 04:58:15.912187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:59.329 [2024-11-21 04:58:15.912196] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.329 [2024-11-21 04:58:15.912634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.329 [2024-11-21 04:58:15.912656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:59.329 [2024-11-21 04:58:15.912738] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:59.329 [2024-11-21 04:58:15.912757] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:59.329 [2024-11-21 04:58:15.912768] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:59.329 [2024-11-21 04:58:15.912777] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:59.329 BaseBdev1 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.329 04:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.293 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.293 "name": "raid_bdev1", 00:12:00.293 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:12:00.293 "strip_size_kb": 0, 00:12:00.293 "state": "online", 00:12:00.293 "raid_level": "raid1", 00:12:00.293 "superblock": true, 00:12:00.293 "num_base_bdevs": 2, 00:12:00.293 "num_base_bdevs_discovered": 1, 00:12:00.293 "num_base_bdevs_operational": 1, 00:12:00.293 "base_bdevs_list": [ 00:12:00.293 { 00:12:00.293 "name": null, 00:12:00.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.293 "is_configured": false, 00:12:00.293 "data_offset": 0, 00:12:00.293 "data_size": 63488 00:12:00.293 }, 00:12:00.293 { 00:12:00.293 "name": "BaseBdev2", 00:12:00.293 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:12:00.293 "is_configured": true, 00:12:00.293 "data_offset": 2048, 00:12:00.293 "data_size": 63488 00:12:00.293 } 00:12:00.293 ] 00:12:00.293 }' 00:12:00.294 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.294 04:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.863 "name": "raid_bdev1", 00:12:00.863 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:12:00.863 "strip_size_kb": 0, 00:12:00.863 "state": "online", 00:12:00.863 "raid_level": "raid1", 00:12:00.863 "superblock": true, 00:12:00.863 "num_base_bdevs": 2, 00:12:00.863 "num_base_bdevs_discovered": 1, 00:12:00.863 "num_base_bdevs_operational": 1, 00:12:00.863 "base_bdevs_list": [ 00:12:00.863 { 00:12:00.863 "name": null, 00:12:00.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.863 "is_configured": false, 00:12:00.863 "data_offset": 0, 00:12:00.863 "data_size": 63488 00:12:00.863 }, 00:12:00.863 { 00:12:00.863 "name": "BaseBdev2", 00:12:00.863 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:12:00.863 "is_configured": true, 
00:12:00.863 "data_offset": 2048, 00:12:00.863 "data_size": 63488 00:12:00.863 } 00:12:00.863 ] 00:12:00.863 }' 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.863 [2024-11-21 04:58:17.513644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.863 [2024-11-21 04:58:17.513805] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:00.863 [2024-11-21 04:58:17.513820] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:00.863 request: 00:12:00.863 { 00:12:00.863 "base_bdev": "BaseBdev1", 00:12:00.863 "raid_bdev": "raid_bdev1", 00:12:00.863 "method": "bdev_raid_add_base_bdev", 00:12:00.863 "req_id": 1 00:12:00.863 } 00:12:00.863 Got JSON-RPC error response 00:12:00.863 response: 00:12:00.863 { 00:12:00.863 "code": -22, 00:12:00.863 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:00.863 } 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:00.863 04:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:01.802 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:01.802 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.802 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.802 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.802 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.803 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:01.803 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.803 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.803 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.803 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.803 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.803 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.803 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.062 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.062 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.062 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.062 "name": "raid_bdev1", 00:12:02.062 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:12:02.062 "strip_size_kb": 0, 00:12:02.062 "state": "online", 00:12:02.062 "raid_level": "raid1", 00:12:02.062 "superblock": true, 00:12:02.062 "num_base_bdevs": 2, 00:12:02.062 "num_base_bdevs_discovered": 1, 00:12:02.063 "num_base_bdevs_operational": 1, 00:12:02.063 "base_bdevs_list": [ 00:12:02.063 { 00:12:02.063 "name": null, 00:12:02.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.063 "is_configured": false, 00:12:02.063 "data_offset": 0, 00:12:02.063 "data_size": 63488 00:12:02.063 }, 00:12:02.063 { 00:12:02.063 "name": "BaseBdev2", 00:12:02.063 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:12:02.063 "is_configured": true, 00:12:02.063 "data_offset": 2048, 00:12:02.063 "data_size": 63488 00:12:02.063 } 00:12:02.063 ] 00:12:02.063 }' 
00:12:02.063 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.063 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.321 "name": "raid_bdev1", 00:12:02.321 "uuid": "0df59ced-cd54-4a62-9543-de4c2b4b0c03", 00:12:02.321 "strip_size_kb": 0, 00:12:02.321 "state": "online", 00:12:02.321 "raid_level": "raid1", 00:12:02.321 "superblock": true, 00:12:02.321 "num_base_bdevs": 2, 00:12:02.321 "num_base_bdevs_discovered": 1, 00:12:02.321 "num_base_bdevs_operational": 1, 00:12:02.321 "base_bdevs_list": [ 00:12:02.321 { 00:12:02.321 "name": null, 00:12:02.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.321 "is_configured": false, 00:12:02.321 "data_offset": 0, 
00:12:02.321 "data_size": 63488 00:12:02.321 }, 00:12:02.321 { 00:12:02.321 "name": "BaseBdev2", 00:12:02.321 "uuid": "d4ae498d-8c2e-5f38-9919-936f219adf01", 00:12:02.321 "is_configured": true, 00:12:02.321 "data_offset": 2048, 00:12:02.321 "data_size": 63488 00:12:02.321 } 00:12:02.321 ] 00:12:02.321 }' 00:12:02.321 04:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.321 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.321 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87652 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 87652 ']' 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 87652 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87652 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87652' 00:12:02.581 killing process with pid 87652 00:12:02.581 Received shutdown signal, test time was about 16.427621 seconds 00:12:02.581 00:12:02.581 Latency(us) 00:12:02.581 [2024-11-21T04:58:19.316Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.581 [2024-11-21T04:58:19.316Z] =================================================================================================================== 00:12:02.581 [2024-11-21T04:58:19.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 87652 00:12:02.581 [2024-11-21 04:58:19.108275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.581 [2024-11-21 04:58:19.108419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.581 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 87652 00:12:02.581 [2024-11-21 04:58:19.108475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.581 [2024-11-21 04:58:19.108489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:02.581 [2024-11-21 04:58:19.134637] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:02.841 00:12:02.841 real 0m18.255s 00:12:02.841 user 0m24.272s 00:12:02.841 sys 0m2.059s 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.841 ************************************ 00:12:02.841 END TEST raid_rebuild_test_sb_io 00:12:02.841 ************************************ 00:12:02.841 04:58:19 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:02.841 04:58:19 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:02.841 04:58:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:02.841 
04:58:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.841 04:58:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.841 ************************************ 00:12:02.841 START TEST raid_rebuild_test 00:12:02.841 ************************************ 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88324 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88324 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 88324 ']' 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.841 04:58:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.841 04:58:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.842 04:58:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.842 04:58:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.842 [2024-11-21 04:58:19.525791] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:12:02.842 [2024-11-21 04:58:19.526062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:02.842 Zero copy mechanism will not be used. 00:12:02.842 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88324 ] 00:12:03.101 [2024-11-21 04:58:19.703624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.101 [2024-11-21 04:58:19.730766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.101 [2024-11-21 04:58:19.772683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.101 [2024-11-21 04:58:19.772716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.672 BaseBdev1_malloc 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.672 [2024-11-21 04:58:20.346410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:03.672 [2024-11-21 04:58:20.346492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.672 [2024-11-21 04:58:20.346519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:03.672 [2024-11-21 04:58:20.346543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.672 [2024-11-21 04:58:20.348691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.672 [2024-11-21 04:58:20.348727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:03.672 BaseBdev1 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:12:03.672 BaseBdev2_malloc 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.672 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.672 [2024-11-21 04:58:20.374907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:03.672 [2024-11-21 04:58:20.374959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.673 [2024-11-21 04:58:20.374996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:03.673 [2024-11-21 04:58:20.375004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.673 [2024-11-21 04:58:20.377136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.673 [2024-11-21 04:58:20.377166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:03.673 BaseBdev2 00:12:03.673 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.673 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.673 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:03.673 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.673 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.673 BaseBdev3_malloc 00:12:03.673 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.673 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:03.673 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.673 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.673 [2024-11-21 04:58:20.403394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:03.673 [2024-11-21 04:58:20.403448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.673 [2024-11-21 04:58:20.403469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:03.673 [2024-11-21 04:58:20.403478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.931 [2024-11-21 04:58:20.405613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.932 [2024-11-21 04:58:20.405647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:03.932 BaseBdev3 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.932 BaseBdev4_malloc 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.932 [2024-11-21 04:58:20.442466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:03.932 [2024-11-21 04:58:20.442570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.932 [2024-11-21 04:58:20.442596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:03.932 [2024-11-21 04:58:20.442605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.932 [2024-11-21 04:58:20.444729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.932 [2024-11-21 04:58:20.444768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:03.932 BaseBdev4 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.932 spare_malloc 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.932 spare_delay 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:03.932 
04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.932 [2024-11-21 04:58:20.482841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:03.932 [2024-11-21 04:58:20.482892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.932 [2024-11-21 04:58:20.482929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:03.932 [2024-11-21 04:58:20.482937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.932 [2024-11-21 04:58:20.485038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.932 [2024-11-21 04:58:20.485125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:03.932 spare 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.932 [2024-11-21 04:58:20.494881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.932 [2024-11-21 04:58:20.496605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.932 [2024-11-21 04:58:20.496675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.932 [2024-11-21 04:58:20.496714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.932 [2024-11-21 04:58:20.496790] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000006280 00:12:03.932 [2024-11-21 04:58:20.496799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:03.932 [2024-11-21 04:58:20.497018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:03.932 [2024-11-21 04:58:20.497217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:03.932 [2024-11-21 04:58:20.497251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:03.932 [2024-11-21 04:58:20.497420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.932 04:58:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.932 "name": "raid_bdev1", 00:12:03.932 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:03.932 "strip_size_kb": 0, 00:12:03.932 "state": "online", 00:12:03.932 "raid_level": "raid1", 00:12:03.932 "superblock": false, 00:12:03.932 "num_base_bdevs": 4, 00:12:03.932 "num_base_bdevs_discovered": 4, 00:12:03.932 "num_base_bdevs_operational": 4, 00:12:03.932 "base_bdevs_list": [ 00:12:03.932 { 00:12:03.932 "name": "BaseBdev1", 00:12:03.932 "uuid": "6d9344ca-7a6e-516c-a495-605a0543a4fe", 00:12:03.932 "is_configured": true, 00:12:03.932 "data_offset": 0, 00:12:03.932 "data_size": 65536 00:12:03.932 }, 00:12:03.932 { 00:12:03.932 "name": "BaseBdev2", 00:12:03.932 "uuid": "7d395d2f-c4ed-5e36-aa52-4b8ed985239b", 00:12:03.932 "is_configured": true, 00:12:03.932 "data_offset": 0, 00:12:03.932 "data_size": 65536 00:12:03.932 }, 00:12:03.932 { 00:12:03.932 "name": "BaseBdev3", 00:12:03.932 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:03.932 "is_configured": true, 00:12:03.932 "data_offset": 0, 00:12:03.932 "data_size": 65536 00:12:03.932 }, 00:12:03.932 { 00:12:03.932 "name": "BaseBdev4", 00:12:03.932 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:03.932 "is_configured": true, 00:12:03.932 "data_offset": 0, 00:12:03.932 "data_size": 65536 00:12:03.932 } 00:12:03.932 ] 00:12:03.932 }' 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.932 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:04.191 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:04.191 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.191 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.191 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:04.191 [2024-11-21 04:58:20.906536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.191 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:04.449 04:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:04.450 04:58:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:04.450 [2024-11-21 04:58:21.161832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:04.450 /dev/nbd0 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:04.709 04:58:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.709 1+0 records in 00:12:04.709 1+0 records out 00:12:04.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675376 s, 6.1 MB/s 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:04.709 04:58:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:11.280 65536+0 records in 00:12:11.280 65536+0 records out 00:12:11.280 33554432 bytes (34 MB, 32 MiB) copied, 6.2419 s, 5.4 MB/s 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:11.280 
04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:11.280 [2024-11-21 04:58:27.678657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.280 [2024-11-21 04:58:27.722547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.280 "name": "raid_bdev1", 00:12:11.280 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:11.280 "strip_size_kb": 0, 00:12:11.280 "state": "online", 00:12:11.280 "raid_level": "raid1", 00:12:11.280 "superblock": false, 00:12:11.280 "num_base_bdevs": 4, 00:12:11.280 "num_base_bdevs_discovered": 3, 00:12:11.280 "num_base_bdevs_operational": 3, 00:12:11.280 "base_bdevs_list": [ 00:12:11.280 { 00:12:11.280 "name": null, 00:12:11.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.280 
"is_configured": false, 00:12:11.280 "data_offset": 0, 00:12:11.280 "data_size": 65536 00:12:11.280 }, 00:12:11.280 { 00:12:11.280 "name": "BaseBdev2", 00:12:11.280 "uuid": "7d395d2f-c4ed-5e36-aa52-4b8ed985239b", 00:12:11.280 "is_configured": true, 00:12:11.280 "data_offset": 0, 00:12:11.280 "data_size": 65536 00:12:11.280 }, 00:12:11.280 { 00:12:11.280 "name": "BaseBdev3", 00:12:11.280 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:11.280 "is_configured": true, 00:12:11.280 "data_offset": 0, 00:12:11.280 "data_size": 65536 00:12:11.280 }, 00:12:11.280 { 00:12:11.280 "name": "BaseBdev4", 00:12:11.280 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:11.280 "is_configured": true, 00:12:11.280 "data_offset": 0, 00:12:11.280 "data_size": 65536 00:12:11.280 } 00:12:11.280 ] 00:12:11.280 }' 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.280 04:58:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.540 04:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:11.540 04:58:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.540 04:58:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.540 [2024-11-21 04:58:28.141844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.540 [2024-11-21 04:58:28.146104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:11.540 04:58:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.540 04:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:11.540 [2024-11-21 04:58:28.148057] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.477 "name": "raid_bdev1", 00:12:12.477 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:12.477 "strip_size_kb": 0, 00:12:12.477 "state": "online", 00:12:12.477 "raid_level": "raid1", 00:12:12.477 "superblock": false, 00:12:12.477 "num_base_bdevs": 4, 00:12:12.477 "num_base_bdevs_discovered": 4, 00:12:12.477 "num_base_bdevs_operational": 4, 00:12:12.477 "process": { 00:12:12.477 "type": "rebuild", 00:12:12.477 "target": "spare", 00:12:12.477 "progress": { 00:12:12.477 "blocks": 20480, 00:12:12.477 "percent": 31 00:12:12.477 } 00:12:12.477 }, 00:12:12.477 "base_bdevs_list": [ 00:12:12.477 { 00:12:12.477 "name": "spare", 00:12:12.477 "uuid": "2fb3154c-39df-51b7-bc17-fd1e33f5f6de", 00:12:12.477 "is_configured": true, 00:12:12.477 "data_offset": 0, 00:12:12.477 "data_size": 65536 00:12:12.477 }, 00:12:12.477 { 00:12:12.477 "name": "BaseBdev2", 00:12:12.477 "uuid": 
"7d395d2f-c4ed-5e36-aa52-4b8ed985239b", 00:12:12.477 "is_configured": true, 00:12:12.477 "data_offset": 0, 00:12:12.477 "data_size": 65536 00:12:12.477 }, 00:12:12.477 { 00:12:12.477 "name": "BaseBdev3", 00:12:12.477 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:12.477 "is_configured": true, 00:12:12.477 "data_offset": 0, 00:12:12.477 "data_size": 65536 00:12:12.477 }, 00:12:12.477 { 00:12:12.477 "name": "BaseBdev4", 00:12:12.477 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:12.477 "is_configured": true, 00:12:12.477 "data_offset": 0, 00:12:12.477 "data_size": 65536 00:12:12.477 } 00:12:12.477 ] 00:12:12.477 }' 00:12:12.477 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.737 [2024-11-21 04:58:29.308747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.737 [2024-11-21 04:58:29.353223] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:12.737 [2024-11-21 04:58:29.353334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.737 [2024-11-21 04:58:29.353357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.737 [2024-11-21 04:58:29.353364] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.737 "name": "raid_bdev1", 00:12:12.737 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:12.737 "strip_size_kb": 0, 00:12:12.737 "state": "online", 
00:12:12.737 "raid_level": "raid1", 00:12:12.737 "superblock": false, 00:12:12.737 "num_base_bdevs": 4, 00:12:12.737 "num_base_bdevs_discovered": 3, 00:12:12.737 "num_base_bdevs_operational": 3, 00:12:12.737 "base_bdevs_list": [ 00:12:12.737 { 00:12:12.737 "name": null, 00:12:12.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.737 "is_configured": false, 00:12:12.737 "data_offset": 0, 00:12:12.737 "data_size": 65536 00:12:12.737 }, 00:12:12.737 { 00:12:12.737 "name": "BaseBdev2", 00:12:12.737 "uuid": "7d395d2f-c4ed-5e36-aa52-4b8ed985239b", 00:12:12.737 "is_configured": true, 00:12:12.737 "data_offset": 0, 00:12:12.737 "data_size": 65536 00:12:12.737 }, 00:12:12.737 { 00:12:12.737 "name": "BaseBdev3", 00:12:12.737 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:12.737 "is_configured": true, 00:12:12.737 "data_offset": 0, 00:12:12.737 "data_size": 65536 00:12:12.737 }, 00:12:12.737 { 00:12:12.737 "name": "BaseBdev4", 00:12:12.737 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:12.737 "is_configured": true, 00:12:12.737 "data_offset": 0, 00:12:12.737 "data_size": 65536 00:12:12.737 } 00:12:12.737 ] 00:12:12.737 }' 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.737 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.306 "name": "raid_bdev1", 00:12:13.306 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:13.306 "strip_size_kb": 0, 00:12:13.306 "state": "online", 00:12:13.306 "raid_level": "raid1", 00:12:13.306 "superblock": false, 00:12:13.306 "num_base_bdevs": 4, 00:12:13.306 "num_base_bdevs_discovered": 3, 00:12:13.306 "num_base_bdevs_operational": 3, 00:12:13.306 "base_bdevs_list": [ 00:12:13.306 { 00:12:13.306 "name": null, 00:12:13.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.306 "is_configured": false, 00:12:13.306 "data_offset": 0, 00:12:13.306 "data_size": 65536 00:12:13.306 }, 00:12:13.306 { 00:12:13.306 "name": "BaseBdev2", 00:12:13.306 "uuid": "7d395d2f-c4ed-5e36-aa52-4b8ed985239b", 00:12:13.306 "is_configured": true, 00:12:13.306 "data_offset": 0, 00:12:13.306 "data_size": 65536 00:12:13.306 }, 00:12:13.306 { 00:12:13.306 "name": "BaseBdev3", 00:12:13.306 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:13.306 "is_configured": true, 00:12:13.306 "data_offset": 0, 00:12:13.306 "data_size": 65536 00:12:13.306 }, 00:12:13.306 { 00:12:13.306 "name": "BaseBdev4", 00:12:13.306 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:13.306 "is_configured": true, 00:12:13.306 "data_offset": 0, 00:12:13.306 "data_size": 65536 00:12:13.306 } 00:12:13.306 ] 00:12:13.306 }' 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.306 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.307 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.307 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:13.307 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.307 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.307 [2024-11-21 04:58:29.905007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:13.307 [2024-11-21 04:58:29.909408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:13.307 04:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.307 04:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:13.307 [2024-11-21 04:58:29.911539] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:14.243 04:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.243 04:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.243 04:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.243 04:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.243 04:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.243 04:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.243 04:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.243 04:58:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.243 04:58:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.243 04:58:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.243 04:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.243 "name": "raid_bdev1", 00:12:14.243 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:14.243 "strip_size_kb": 0, 00:12:14.243 "state": "online", 00:12:14.243 "raid_level": "raid1", 00:12:14.243 "superblock": false, 00:12:14.243 "num_base_bdevs": 4, 00:12:14.243 "num_base_bdevs_discovered": 4, 00:12:14.243 "num_base_bdevs_operational": 4, 00:12:14.243 "process": { 00:12:14.243 "type": "rebuild", 00:12:14.243 "target": "spare", 00:12:14.243 "progress": { 00:12:14.243 "blocks": 20480, 00:12:14.243 "percent": 31 00:12:14.243 } 00:12:14.243 }, 00:12:14.243 "base_bdevs_list": [ 00:12:14.244 { 00:12:14.244 "name": "spare", 00:12:14.244 "uuid": "2fb3154c-39df-51b7-bc17-fd1e33f5f6de", 00:12:14.244 "is_configured": true, 00:12:14.244 "data_offset": 0, 00:12:14.244 "data_size": 65536 00:12:14.244 }, 00:12:14.244 { 00:12:14.244 "name": "BaseBdev2", 00:12:14.244 "uuid": "7d395d2f-c4ed-5e36-aa52-4b8ed985239b", 00:12:14.244 "is_configured": true, 00:12:14.244 "data_offset": 0, 00:12:14.244 "data_size": 65536 00:12:14.244 }, 00:12:14.244 { 00:12:14.244 "name": "BaseBdev3", 00:12:14.244 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:14.244 "is_configured": true, 00:12:14.244 "data_offset": 0, 00:12:14.244 "data_size": 65536 00:12:14.244 }, 00:12:14.244 { 00:12:14.244 "name": "BaseBdev4", 00:12:14.244 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:14.244 "is_configured": true, 00:12:14.244 "data_offset": 0, 00:12:14.244 "data_size": 65536 00:12:14.244 } 00:12:14.244 ] 00:12:14.244 }' 00:12:14.244 04:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.503 [2024-11-21 04:58:31.048254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:14.503 [2024-11-21 04:58:31.116097] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.503 
04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.503 "name": "raid_bdev1", 00:12:14.503 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:14.503 "strip_size_kb": 0, 00:12:14.503 "state": "online", 00:12:14.503 "raid_level": "raid1", 00:12:14.503 "superblock": false, 00:12:14.503 "num_base_bdevs": 4, 00:12:14.503 "num_base_bdevs_discovered": 3, 00:12:14.503 "num_base_bdevs_operational": 3, 00:12:14.503 "process": { 00:12:14.503 "type": "rebuild", 00:12:14.503 "target": "spare", 00:12:14.503 "progress": { 00:12:14.503 "blocks": 24576, 00:12:14.503 "percent": 37 00:12:14.503 } 00:12:14.503 }, 00:12:14.503 "base_bdevs_list": [ 00:12:14.503 { 00:12:14.503 "name": "spare", 00:12:14.503 "uuid": "2fb3154c-39df-51b7-bc17-fd1e33f5f6de", 00:12:14.503 "is_configured": true, 00:12:14.503 "data_offset": 0, 00:12:14.503 "data_size": 65536 00:12:14.503 }, 00:12:14.503 { 00:12:14.503 "name": null, 00:12:14.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.503 "is_configured": false, 00:12:14.503 "data_offset": 0, 00:12:14.503 "data_size": 65536 00:12:14.503 }, 00:12:14.503 { 00:12:14.503 "name": "BaseBdev3", 00:12:14.503 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:14.503 "is_configured": true, 
00:12:14.503 "data_offset": 0, 00:12:14.503 "data_size": 65536 00:12:14.503 }, 00:12:14.503 { 00:12:14.503 "name": "BaseBdev4", 00:12:14.503 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:14.503 "is_configured": true, 00:12:14.503 "data_offset": 0, 00:12:14.503 "data_size": 65536 00:12:14.503 } 00:12:14.503 ] 00:12:14.503 }' 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=363 00:12:14.503 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.763 04:58:31 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.763 "name": "raid_bdev1", 00:12:14.763 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:14.763 "strip_size_kb": 0, 00:12:14.763 "state": "online", 00:12:14.763 "raid_level": "raid1", 00:12:14.763 "superblock": false, 00:12:14.763 "num_base_bdevs": 4, 00:12:14.763 "num_base_bdevs_discovered": 3, 00:12:14.763 "num_base_bdevs_operational": 3, 00:12:14.763 "process": { 00:12:14.763 "type": "rebuild", 00:12:14.763 "target": "spare", 00:12:14.763 "progress": { 00:12:14.763 "blocks": 26624, 00:12:14.763 "percent": 40 00:12:14.763 } 00:12:14.763 }, 00:12:14.763 "base_bdevs_list": [ 00:12:14.763 { 00:12:14.763 "name": "spare", 00:12:14.763 "uuid": "2fb3154c-39df-51b7-bc17-fd1e33f5f6de", 00:12:14.763 "is_configured": true, 00:12:14.763 "data_offset": 0, 00:12:14.763 "data_size": 65536 00:12:14.763 }, 00:12:14.763 { 00:12:14.763 "name": null, 00:12:14.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.763 "is_configured": false, 00:12:14.763 "data_offset": 0, 00:12:14.763 "data_size": 65536 00:12:14.763 }, 00:12:14.763 { 00:12:14.763 "name": "BaseBdev3", 00:12:14.763 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:14.763 "is_configured": true, 00:12:14.763 "data_offset": 0, 00:12:14.763 "data_size": 65536 00:12:14.763 }, 00:12:14.763 { 00:12:14.763 "name": "BaseBdev4", 00:12:14.763 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:14.763 "is_configured": true, 00:12:14.763 "data_offset": 0, 00:12:14.763 "data_size": 65536 00:12:14.763 } 00:12:14.763 ] 00:12:14.763 }' 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.763 04:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.703 "name": "raid_bdev1", 00:12:15.703 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:15.703 "strip_size_kb": 0, 00:12:15.703 "state": "online", 00:12:15.703 "raid_level": "raid1", 00:12:15.703 "superblock": false, 00:12:15.703 "num_base_bdevs": 4, 00:12:15.703 "num_base_bdevs_discovered": 3, 00:12:15.703 "num_base_bdevs_operational": 3, 00:12:15.703 "process": { 00:12:15.703 "type": "rebuild", 00:12:15.703 "target": "spare", 00:12:15.703 "progress": { 00:12:15.703 
"blocks": 49152, 00:12:15.703 "percent": 75 00:12:15.703 } 00:12:15.703 }, 00:12:15.703 "base_bdevs_list": [ 00:12:15.703 { 00:12:15.703 "name": "spare", 00:12:15.703 "uuid": "2fb3154c-39df-51b7-bc17-fd1e33f5f6de", 00:12:15.703 "is_configured": true, 00:12:15.703 "data_offset": 0, 00:12:15.703 "data_size": 65536 00:12:15.703 }, 00:12:15.703 { 00:12:15.703 "name": null, 00:12:15.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.703 "is_configured": false, 00:12:15.703 "data_offset": 0, 00:12:15.703 "data_size": 65536 00:12:15.703 }, 00:12:15.703 { 00:12:15.703 "name": "BaseBdev3", 00:12:15.703 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:15.703 "is_configured": true, 00:12:15.703 "data_offset": 0, 00:12:15.703 "data_size": 65536 00:12:15.703 }, 00:12:15.703 { 00:12:15.703 "name": "BaseBdev4", 00:12:15.703 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:15.703 "is_configured": true, 00:12:15.703 "data_offset": 0, 00:12:15.703 "data_size": 65536 00:12:15.703 } 00:12:15.703 ] 00:12:15.703 }' 00:12:15.703 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.962 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.962 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.962 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.962 04:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:16.531 [2024-11-21 04:58:33.124125] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:16.531 [2024-11-21 04:58:33.124232] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:16.531 [2024-11-21 04:58:33.124283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.790 04:58:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.050 "name": "raid_bdev1", 00:12:17.050 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:17.050 "strip_size_kb": 0, 00:12:17.050 "state": "online", 00:12:17.050 "raid_level": "raid1", 00:12:17.050 "superblock": false, 00:12:17.050 "num_base_bdevs": 4, 00:12:17.050 "num_base_bdevs_discovered": 3, 00:12:17.050 "num_base_bdevs_operational": 3, 00:12:17.050 "base_bdevs_list": [ 00:12:17.050 { 00:12:17.050 "name": "spare", 00:12:17.050 "uuid": "2fb3154c-39df-51b7-bc17-fd1e33f5f6de", 00:12:17.050 "is_configured": true, 00:12:17.050 "data_offset": 0, 00:12:17.050 "data_size": 65536 00:12:17.050 }, 00:12:17.050 { 00:12:17.050 "name": null, 00:12:17.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.050 "is_configured": false, 00:12:17.050 
"data_offset": 0, 00:12:17.050 "data_size": 65536 00:12:17.050 }, 00:12:17.050 { 00:12:17.050 "name": "BaseBdev3", 00:12:17.050 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:17.050 "is_configured": true, 00:12:17.050 "data_offset": 0, 00:12:17.050 "data_size": 65536 00:12:17.050 }, 00:12:17.050 { 00:12:17.050 "name": "BaseBdev4", 00:12:17.050 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:17.050 "is_configured": true, 00:12:17.050 "data_offset": 0, 00:12:17.050 "data_size": 65536 00:12:17.050 } 00:12:17.050 ] 00:12:17.050 }' 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.050 "name": "raid_bdev1", 00:12:17.050 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:17.050 "strip_size_kb": 0, 00:12:17.050 "state": "online", 00:12:17.050 "raid_level": "raid1", 00:12:17.050 "superblock": false, 00:12:17.050 "num_base_bdevs": 4, 00:12:17.050 "num_base_bdevs_discovered": 3, 00:12:17.050 "num_base_bdevs_operational": 3, 00:12:17.050 "base_bdevs_list": [ 00:12:17.050 { 00:12:17.050 "name": "spare", 00:12:17.050 "uuid": "2fb3154c-39df-51b7-bc17-fd1e33f5f6de", 00:12:17.050 "is_configured": true, 00:12:17.050 "data_offset": 0, 00:12:17.050 "data_size": 65536 00:12:17.050 }, 00:12:17.050 { 00:12:17.050 "name": null, 00:12:17.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.050 "is_configured": false, 00:12:17.050 "data_offset": 0, 00:12:17.050 "data_size": 65536 00:12:17.050 }, 00:12:17.050 { 00:12:17.050 "name": "BaseBdev3", 00:12:17.050 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:17.050 "is_configured": true, 00:12:17.050 "data_offset": 0, 00:12:17.050 "data_size": 65536 00:12:17.050 }, 00:12:17.050 { 00:12:17.050 "name": "BaseBdev4", 00:12:17.050 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:17.050 "is_configured": true, 00:12:17.050 "data_offset": 0, 00:12:17.050 "data_size": 65536 00:12:17.050 } 00:12:17.050 ] 00:12:17.050 }' 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.050 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.310 
04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.310 "name": "raid_bdev1", 00:12:17.310 "uuid": "7083e774-e53f-4c5a-af24-ff36443efd86", 00:12:17.310 "strip_size_kb": 0, 00:12:17.310 "state": "online", 00:12:17.310 "raid_level": "raid1", 00:12:17.310 "superblock": false, 00:12:17.310 "num_base_bdevs": 4, 00:12:17.310 "num_base_bdevs_discovered": 
3, 00:12:17.310 "num_base_bdevs_operational": 3, 00:12:17.310 "base_bdevs_list": [ 00:12:17.310 { 00:12:17.310 "name": "spare", 00:12:17.310 "uuid": "2fb3154c-39df-51b7-bc17-fd1e33f5f6de", 00:12:17.310 "is_configured": true, 00:12:17.310 "data_offset": 0, 00:12:17.310 "data_size": 65536 00:12:17.310 }, 00:12:17.310 { 00:12:17.310 "name": null, 00:12:17.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.310 "is_configured": false, 00:12:17.310 "data_offset": 0, 00:12:17.310 "data_size": 65536 00:12:17.310 }, 00:12:17.310 { 00:12:17.310 "name": "BaseBdev3", 00:12:17.310 "uuid": "bc58fc89-0c81-5a68-8fc1-fe3a18ee75cc", 00:12:17.310 "is_configured": true, 00:12:17.310 "data_offset": 0, 00:12:17.310 "data_size": 65536 00:12:17.310 }, 00:12:17.310 { 00:12:17.310 "name": "BaseBdev4", 00:12:17.310 "uuid": "a308729e-3d0c-54a5-9ff1-636163a52726", 00:12:17.310 "is_configured": true, 00:12:17.310 "data_offset": 0, 00:12:17.310 "data_size": 65536 00:12:17.310 } 00:12:17.310 ] 00:12:17.310 }' 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.310 04:58:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.570 [2024-11-21 04:58:34.195003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.570 [2024-11-21 04:58:34.195099] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.570 [2024-11-21 04:58:34.195228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.570 [2024-11-21 04:58:34.195413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:12:17.570 [2024-11-21 04:58:34.195443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:17.570 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:17.829 /dev/nbd0 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.829 1+0 records in 00:12:17.829 1+0 records out 00:12:17.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210523 s, 19.5 MB/s 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.829 04:58:34 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:17.829 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:18.088 /dev/nbd1 00:12:18.088 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:18.088 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:18.088 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:18.088 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:18.088 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:18.088 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:18.088 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:18.088 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:18.088 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:18.088 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.089 1+0 records in 00:12:18.089 1+0 records out 00:12:18.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353921 s, 11.6 MB/s 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.089 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:18.348 04:58:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.348 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.348 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.348 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:12:18.348 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.348 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.348 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:18.348 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.348 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.348 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88324 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 88324 ']' 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 88324 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88324 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.607 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88324' 00:12:18.607 killing process with pid 88324 00:12:18.607 Received shutdown signal, test time was about 60.000000 seconds 00:12:18.607 00:12:18.607 Latency(us) 00:12:18.607 [2024-11-21T04:58:35.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.607 [2024-11-21T04:58:35.343Z] =================================================================================================================== 00:12:18.608 [2024-11-21T04:58:35.343Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:18.608 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 88324 00:12:18.608 [2024-11-21 04:58:35.270705] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.608 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 88324 00:12:18.608 [2024-11-21 04:58:35.321460] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.867 04:58:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:18.867 00:12:18.867 real 0m16.114s 00:12:18.867 user 0m17.434s 00:12:18.867 sys 0m3.125s 00:12:18.867 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.867 04:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.867 ************************************ 00:12:18.867 END TEST raid_rebuild_test 00:12:18.867 ************************************ 00:12:18.867 
04:58:35 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:18.867 04:58:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:18.867 04:58:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.867 04:58:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.867 ************************************ 00:12:18.867 START TEST raid_rebuild_test_sb 00:12:18.867 ************************************ 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.127 
04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88758 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88758 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 88758 ']' 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.127 04:58:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.127 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:19.127 Zero copy mechanism will not be used. 00:12:19.127 [2024-11-21 04:58:35.694810] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:12:19.127 [2024-11-21 04:58:35.694954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88758 ] 00:12:19.387 [2024-11-21 04:58:35.863365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.387 [2024-11-21 04:58:35.889084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.387 [2024-11-21 04:58:35.931704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.387 [2024-11-21 04:58:35.931749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 BaseBdev1_malloc 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 [2024-11-21 04:58:36.545896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:19.957 [2024-11-21 04:58:36.546010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.957 [2024-11-21 04:58:36.546057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:19.957 [2024-11-21 04:58:36.546069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.957 [2024-11-21 04:58:36.548260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.957 [2024-11-21 04:58:36.548296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:19.957 BaseBdev1 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 BaseBdev2_malloc 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 [2024-11-21 04:58:36.574708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:19.957 [2024-11-21 04:58:36.574761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.957 [2024-11-21 04:58:36.574780] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:19.957 [2024-11-21 04:58:36.574788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.957 [2024-11-21 04:58:36.576864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.957 [2024-11-21 04:58:36.576941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:19.957 BaseBdev2 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 BaseBdev3_malloc 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 [2024-11-21 04:58:36.603276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:19.957 [2024-11-21 04:58:36.603386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.957 [2024-11-21 04:58:36.603412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:19.957 [2024-11-21 04:58:36.603421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:19.957 [2024-11-21 04:58:36.605486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.957 [2024-11-21 04:58:36.605520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:19.957 BaseBdev3 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 BaseBdev4_malloc 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 [2024-11-21 04:58:36.641685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:19.957 [2024-11-21 04:58:36.641777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.957 [2024-11-21 04:58:36.641807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:19.957 [2024-11-21 04:58:36.641817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.957 [2024-11-21 04:58:36.643965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.957 [2024-11-21 04:58:36.643992] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:19.957 BaseBdev4 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 spare_malloc 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 spare_delay 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.957 [2024-11-21 04:58:36.682445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:19.957 [2024-11-21 04:58:36.682539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.957 [2024-11-21 04:58:36.682580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:19.957 [2024-11-21 04:58:36.682608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:19.957 [2024-11-21 04:58:36.684750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.957 [2024-11-21 04:58:36.684824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:19.957 spare 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:19.957 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.958 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.217 [2024-11-21 04:58:36.694492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.217 [2024-11-21 04:58:36.696402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.217 [2024-11-21 04:58:36.696541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.217 [2024-11-21 04:58:36.696608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.217 [2024-11-21 04:58:36.696844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:20.217 [2024-11-21 04:58:36.696893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:20.217 [2024-11-21 04:58:36.697221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:20.217 [2024-11-21 04:58:36.697425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:20.217 [2024-11-21 04:58:36.697477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:20.217 [2024-11-21 04:58:36.697667] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.217 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.217 "name": "raid_bdev1", 00:12:20.218 "uuid": 
"baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:20.218 "strip_size_kb": 0, 00:12:20.218 "state": "online", 00:12:20.218 "raid_level": "raid1", 00:12:20.218 "superblock": true, 00:12:20.218 "num_base_bdevs": 4, 00:12:20.218 "num_base_bdevs_discovered": 4, 00:12:20.218 "num_base_bdevs_operational": 4, 00:12:20.218 "base_bdevs_list": [ 00:12:20.218 { 00:12:20.218 "name": "BaseBdev1", 00:12:20.218 "uuid": "57dfbc2d-7631-51cb-a512-b6ff64124095", 00:12:20.218 "is_configured": true, 00:12:20.218 "data_offset": 2048, 00:12:20.218 "data_size": 63488 00:12:20.218 }, 00:12:20.218 { 00:12:20.218 "name": "BaseBdev2", 00:12:20.218 "uuid": "30b29814-c89d-5352-a17b-548703565850", 00:12:20.218 "is_configured": true, 00:12:20.218 "data_offset": 2048, 00:12:20.218 "data_size": 63488 00:12:20.218 }, 00:12:20.218 { 00:12:20.218 "name": "BaseBdev3", 00:12:20.218 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:20.218 "is_configured": true, 00:12:20.218 "data_offset": 2048, 00:12:20.218 "data_size": 63488 00:12:20.218 }, 00:12:20.218 { 00:12:20.218 "name": "BaseBdev4", 00:12:20.218 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:20.218 "is_configured": true, 00:12:20.218 "data_offset": 2048, 00:12:20.218 "data_size": 63488 00:12:20.218 } 00:12:20.218 ] 00:12:20.218 }' 00:12:20.218 04:58:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.218 04:58:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.477 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.477 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.477 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.477 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:20.477 [2024-11-21 04:58:37.189970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:12:20.477 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:20.736 04:58:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:20.736 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:20.736 [2024-11-21 04:58:37.469233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:20.996 /dev/nbd0 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.996 1+0 records in 00:12:20.996 1+0 records out 00:12:20.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032155 s, 12.7 MB/s 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:20.996 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:20.997 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:20.997 04:58:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:26.271 63488+0 records in 00:12:26.271 63488+0 records out 00:12:26.271 32505856 bytes (33 MB, 31 MiB) copied, 4.92147 s, 6.6 MB/s 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:26.271 [2024-11-21 04:58:42.693399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.271 [2024-11-21 04:58:42.713481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.271 "name": "raid_bdev1", 00:12:26.271 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:26.271 "strip_size_kb": 0, 00:12:26.271 "state": "online", 00:12:26.271 "raid_level": "raid1", 00:12:26.271 "superblock": true, 00:12:26.271 "num_base_bdevs": 4, 00:12:26.271 "num_base_bdevs_discovered": 3, 00:12:26.271 "num_base_bdevs_operational": 3, 00:12:26.271 "base_bdevs_list": [ 00:12:26.271 { 00:12:26.271 "name": null, 00:12:26.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.271 "is_configured": false, 00:12:26.271 "data_offset": 0, 00:12:26.271 "data_size": 63488 00:12:26.271 }, 00:12:26.271 { 00:12:26.271 "name": "BaseBdev2", 00:12:26.271 "uuid": "30b29814-c89d-5352-a17b-548703565850", 00:12:26.271 "is_configured": true, 00:12:26.271 
"data_offset": 2048, 00:12:26.271 "data_size": 63488 00:12:26.271 }, 00:12:26.271 { 00:12:26.271 "name": "BaseBdev3", 00:12:26.271 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:26.271 "is_configured": true, 00:12:26.271 "data_offset": 2048, 00:12:26.271 "data_size": 63488 00:12:26.271 }, 00:12:26.271 { 00:12:26.271 "name": "BaseBdev4", 00:12:26.271 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:26.271 "is_configured": true, 00:12:26.271 "data_offset": 2048, 00:12:26.271 "data_size": 63488 00:12:26.271 } 00:12:26.271 ] 00:12:26.271 }' 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.271 04:58:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.534 04:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:26.534 04:58:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.534 04:58:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.534 [2024-11-21 04:58:43.172719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.534 [2024-11-21 04:58:43.177064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:26.534 04:58:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.534 04:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:26.534 [2024-11-21 04:58:43.179188] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.472 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.472 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.472 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:27.472 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.472 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.472 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.472 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.472 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.472 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.732 "name": "raid_bdev1", 00:12:27.732 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:27.732 "strip_size_kb": 0, 00:12:27.732 "state": "online", 00:12:27.732 "raid_level": "raid1", 00:12:27.732 "superblock": true, 00:12:27.732 "num_base_bdevs": 4, 00:12:27.732 "num_base_bdevs_discovered": 4, 00:12:27.732 "num_base_bdevs_operational": 4, 00:12:27.732 "process": { 00:12:27.732 "type": "rebuild", 00:12:27.732 "target": "spare", 00:12:27.732 "progress": { 00:12:27.732 "blocks": 20480, 00:12:27.732 "percent": 32 00:12:27.732 } 00:12:27.732 }, 00:12:27.732 "base_bdevs_list": [ 00:12:27.732 { 00:12:27.732 "name": "spare", 00:12:27.732 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:27.732 "is_configured": true, 00:12:27.732 "data_offset": 2048, 00:12:27.732 "data_size": 63488 00:12:27.732 }, 00:12:27.732 { 00:12:27.732 "name": "BaseBdev2", 00:12:27.732 "uuid": "30b29814-c89d-5352-a17b-548703565850", 00:12:27.732 "is_configured": true, 00:12:27.732 "data_offset": 2048, 00:12:27.732 "data_size": 63488 00:12:27.732 }, 00:12:27.732 { 00:12:27.732 "name": "BaseBdev3", 00:12:27.732 "uuid": 
"5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:27.732 "is_configured": true, 00:12:27.732 "data_offset": 2048, 00:12:27.732 "data_size": 63488 00:12:27.732 }, 00:12:27.732 { 00:12:27.732 "name": "BaseBdev4", 00:12:27.732 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:27.732 "is_configured": true, 00:12:27.732 "data_offset": 2048, 00:12:27.732 "data_size": 63488 00:12:27.732 } 00:12:27.732 ] 00:12:27.732 }' 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.732 [2024-11-21 04:58:44.347763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.732 [2024-11-21 04:58:44.384984] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:27.732 [2024-11-21 04:58:44.385067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.732 [2024-11-21 04:58:44.385101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.732 [2024-11-21 04:58:44.385109] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.732 "name": "raid_bdev1", 00:12:27.732 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:27.732 "strip_size_kb": 0, 00:12:27.732 "state": "online", 00:12:27.732 "raid_level": "raid1", 00:12:27.732 "superblock": true, 00:12:27.732 "num_base_bdevs": 4, 00:12:27.732 
"num_base_bdevs_discovered": 3, 00:12:27.732 "num_base_bdevs_operational": 3, 00:12:27.732 "base_bdevs_list": [ 00:12:27.732 { 00:12:27.732 "name": null, 00:12:27.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.732 "is_configured": false, 00:12:27.732 "data_offset": 0, 00:12:27.732 "data_size": 63488 00:12:27.732 }, 00:12:27.732 { 00:12:27.732 "name": "BaseBdev2", 00:12:27.732 "uuid": "30b29814-c89d-5352-a17b-548703565850", 00:12:27.732 "is_configured": true, 00:12:27.732 "data_offset": 2048, 00:12:27.732 "data_size": 63488 00:12:27.732 }, 00:12:27.732 { 00:12:27.732 "name": "BaseBdev3", 00:12:27.732 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:27.732 "is_configured": true, 00:12:27.732 "data_offset": 2048, 00:12:27.732 "data_size": 63488 00:12:27.732 }, 00:12:27.732 { 00:12:27.732 "name": "BaseBdev4", 00:12:27.732 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:27.732 "is_configured": true, 00:12:27.732 "data_offset": 2048, 00:12:27.732 "data_size": 63488 00:12:27.732 } 00:12:27.732 ] 00:12:27.732 }' 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.732 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.301 "name": "raid_bdev1", 00:12:28.301 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:28.301 "strip_size_kb": 0, 00:12:28.301 "state": "online", 00:12:28.301 "raid_level": "raid1", 00:12:28.301 "superblock": true, 00:12:28.301 "num_base_bdevs": 4, 00:12:28.301 "num_base_bdevs_discovered": 3, 00:12:28.301 "num_base_bdevs_operational": 3, 00:12:28.301 "base_bdevs_list": [ 00:12:28.301 { 00:12:28.301 "name": null, 00:12:28.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.301 "is_configured": false, 00:12:28.301 "data_offset": 0, 00:12:28.301 "data_size": 63488 00:12:28.301 }, 00:12:28.301 { 00:12:28.301 "name": "BaseBdev2", 00:12:28.301 "uuid": "30b29814-c89d-5352-a17b-548703565850", 00:12:28.301 "is_configured": true, 00:12:28.301 "data_offset": 2048, 00:12:28.301 "data_size": 63488 00:12:28.301 }, 00:12:28.301 { 00:12:28.301 "name": "BaseBdev3", 00:12:28.301 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:28.301 "is_configured": true, 00:12:28.301 "data_offset": 2048, 00:12:28.301 "data_size": 63488 00:12:28.301 }, 00:12:28.301 { 00:12:28.301 "name": "BaseBdev4", 00:12:28.301 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:28.301 "is_configured": true, 00:12:28.301 "data_offset": 2048, 00:12:28.301 "data_size": 63488 00:12:28.301 } 00:12:28.301 ] 00:12:28.301 }' 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.301 [2024-11-21 04:58:44.952961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:28.301 [2024-11-21 04:58:44.957036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.301 04:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:28.301 [2024-11-21 04:58:44.958933] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:29.238 04:58:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.238 04:58:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.238 04:58:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.238 04:58:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.238 04:58:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.238 04:58:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.238 04:58:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.238 04:58:45 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.238 04:58:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.497 04:58:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.497 "name": "raid_bdev1", 00:12:29.497 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:29.497 "strip_size_kb": 0, 00:12:29.497 "state": "online", 00:12:29.497 "raid_level": "raid1", 00:12:29.497 "superblock": true, 00:12:29.497 "num_base_bdevs": 4, 00:12:29.497 "num_base_bdevs_discovered": 4, 00:12:29.497 "num_base_bdevs_operational": 4, 00:12:29.497 "process": { 00:12:29.497 "type": "rebuild", 00:12:29.497 "target": "spare", 00:12:29.497 "progress": { 00:12:29.497 "blocks": 20480, 00:12:29.497 "percent": 32 00:12:29.497 } 00:12:29.497 }, 00:12:29.497 "base_bdevs_list": [ 00:12:29.497 { 00:12:29.497 "name": "spare", 00:12:29.497 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:29.497 "is_configured": true, 00:12:29.497 "data_offset": 2048, 00:12:29.497 "data_size": 63488 00:12:29.497 }, 00:12:29.497 { 00:12:29.497 "name": "BaseBdev2", 00:12:29.497 "uuid": "30b29814-c89d-5352-a17b-548703565850", 00:12:29.497 "is_configured": true, 00:12:29.497 "data_offset": 2048, 00:12:29.497 "data_size": 63488 00:12:29.497 }, 00:12:29.497 { 00:12:29.497 "name": "BaseBdev3", 00:12:29.497 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:29.497 "is_configured": true, 00:12:29.497 "data_offset": 2048, 00:12:29.497 "data_size": 63488 00:12:29.497 }, 00:12:29.497 { 00:12:29.497 "name": "BaseBdev4", 00:12:29.497 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:29.497 "is_configured": true, 00:12:29.497 "data_offset": 2048, 00:12:29.497 "data_size": 63488 00:12:29.497 } 00:12:29.497 ] 00:12:29.497 }' 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:29.497 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.497 04:58:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.497 [2024-11-21 04:58:46.100024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:29.757 [2024-11-21 04:58:46.263567] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.757 "name": "raid_bdev1", 00:12:29.757 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:29.757 "strip_size_kb": 0, 00:12:29.757 "state": "online", 00:12:29.757 "raid_level": "raid1", 00:12:29.757 "superblock": true, 00:12:29.757 "num_base_bdevs": 4, 00:12:29.757 "num_base_bdevs_discovered": 3, 00:12:29.757 "num_base_bdevs_operational": 3, 00:12:29.757 "process": { 00:12:29.757 "type": "rebuild", 00:12:29.757 "target": "spare", 00:12:29.757 "progress": { 00:12:29.757 "blocks": 24576, 00:12:29.757 "percent": 38 00:12:29.757 } 00:12:29.757 }, 00:12:29.757 "base_bdevs_list": [ 00:12:29.757 { 00:12:29.757 "name": "spare", 00:12:29.757 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:29.757 "is_configured": true, 00:12:29.757 "data_offset": 2048, 00:12:29.757 "data_size": 63488 00:12:29.757 }, 00:12:29.757 { 00:12:29.757 "name": null, 00:12:29.757 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:29.757 "is_configured": false, 00:12:29.757 "data_offset": 0, 00:12:29.757 "data_size": 63488 00:12:29.757 }, 00:12:29.757 { 00:12:29.757 "name": "BaseBdev3", 00:12:29.757 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:29.757 "is_configured": true, 00:12:29.757 "data_offset": 2048, 00:12:29.757 "data_size": 63488 00:12:29.757 }, 00:12:29.757 { 00:12:29.757 "name": "BaseBdev4", 00:12:29.757 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:29.757 "is_configured": true, 00:12:29.757 "data_offset": 2048, 00:12:29.757 "data_size": 63488 00:12:29.757 } 00:12:29.757 ] 00:12:29.757 }' 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=378 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.757 
04:58:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.757 "name": "raid_bdev1", 00:12:29.757 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:29.757 "strip_size_kb": 0, 00:12:29.757 "state": "online", 00:12:29.757 "raid_level": "raid1", 00:12:29.757 "superblock": true, 00:12:29.757 "num_base_bdevs": 4, 00:12:29.757 "num_base_bdevs_discovered": 3, 00:12:29.757 "num_base_bdevs_operational": 3, 00:12:29.757 "process": { 00:12:29.757 "type": "rebuild", 00:12:29.757 "target": "spare", 00:12:29.757 "progress": { 00:12:29.757 "blocks": 26624, 00:12:29.757 "percent": 41 00:12:29.757 } 00:12:29.757 }, 00:12:29.757 "base_bdevs_list": [ 00:12:29.757 { 00:12:29.757 "name": "spare", 00:12:29.757 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:29.757 "is_configured": true, 00:12:29.757 "data_offset": 2048, 00:12:29.757 "data_size": 63488 00:12:29.757 }, 00:12:29.757 { 00:12:29.757 "name": null, 00:12:29.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.757 "is_configured": false, 00:12:29.757 "data_offset": 0, 00:12:29.757 "data_size": 63488 00:12:29.757 }, 00:12:29.757 { 00:12:29.757 "name": "BaseBdev3", 00:12:29.757 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:29.757 "is_configured": true, 00:12:29.757 "data_offset": 2048, 00:12:29.757 "data_size": 63488 00:12:29.757 }, 00:12:29.757 { 00:12:29.757 "name": "BaseBdev4", 00:12:29.757 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:29.757 "is_configured": true, 00:12:29.757 "data_offset": 2048, 00:12:29.757 "data_size": 63488 
00:12:29.757 } 00:12:29.757 ] 00:12:29.757 }' 00:12:29.757 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.017 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.017 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.017 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.017 04:58:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.953 04:58:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.954 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.954 "name": "raid_bdev1", 00:12:30.954 "uuid": 
"baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:30.954 "strip_size_kb": 0, 00:12:30.954 "state": "online", 00:12:30.954 "raid_level": "raid1", 00:12:30.954 "superblock": true, 00:12:30.954 "num_base_bdevs": 4, 00:12:30.954 "num_base_bdevs_discovered": 3, 00:12:30.954 "num_base_bdevs_operational": 3, 00:12:30.954 "process": { 00:12:30.954 "type": "rebuild", 00:12:30.954 "target": "spare", 00:12:30.954 "progress": { 00:12:30.954 "blocks": 51200, 00:12:30.954 "percent": 80 00:12:30.954 } 00:12:30.954 }, 00:12:30.954 "base_bdevs_list": [ 00:12:30.954 { 00:12:30.954 "name": "spare", 00:12:30.954 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:30.954 "is_configured": true, 00:12:30.954 "data_offset": 2048, 00:12:30.954 "data_size": 63488 00:12:30.954 }, 00:12:30.954 { 00:12:30.954 "name": null, 00:12:30.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.954 "is_configured": false, 00:12:30.954 "data_offset": 0, 00:12:30.954 "data_size": 63488 00:12:30.954 }, 00:12:30.954 { 00:12:30.954 "name": "BaseBdev3", 00:12:30.954 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:30.954 "is_configured": true, 00:12:30.954 "data_offset": 2048, 00:12:30.954 "data_size": 63488 00:12:30.954 }, 00:12:30.954 { 00:12:30.954 "name": "BaseBdev4", 00:12:30.954 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:30.954 "is_configured": true, 00:12:30.954 "data_offset": 2048, 00:12:30.954 "data_size": 63488 00:12:30.954 } 00:12:30.954 ] 00:12:30.954 }' 00:12:30.954 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.954 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.954 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.213 04:58:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.213 04:58:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:31.472 [2024-11-21 04:58:48.171894] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:31.473 [2024-11-21 04:58:48.171996] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:31.473 [2024-11-21 04:58:48.172144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.042 "name": "raid_bdev1", 00:12:32.042 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:32.042 "strip_size_kb": 0, 00:12:32.042 "state": "online", 00:12:32.042 "raid_level": "raid1", 00:12:32.042 "superblock": true, 00:12:32.042 "num_base_bdevs": 
4, 00:12:32.042 "num_base_bdevs_discovered": 3, 00:12:32.042 "num_base_bdevs_operational": 3, 00:12:32.042 "base_bdevs_list": [ 00:12:32.042 { 00:12:32.042 "name": "spare", 00:12:32.042 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:32.042 "is_configured": true, 00:12:32.042 "data_offset": 2048, 00:12:32.042 "data_size": 63488 00:12:32.042 }, 00:12:32.042 { 00:12:32.042 "name": null, 00:12:32.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.042 "is_configured": false, 00:12:32.042 "data_offset": 0, 00:12:32.042 "data_size": 63488 00:12:32.042 }, 00:12:32.042 { 00:12:32.042 "name": "BaseBdev3", 00:12:32.042 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:32.042 "is_configured": true, 00:12:32.042 "data_offset": 2048, 00:12:32.042 "data_size": 63488 00:12:32.042 }, 00:12:32.042 { 00:12:32.042 "name": "BaseBdev4", 00:12:32.042 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:32.042 "is_configured": true, 00:12:32.042 "data_offset": 2048, 00:12:32.042 "data_size": 63488 00:12:32.042 } 00:12:32.042 ] 00:12:32.042 }' 00:12:32.042 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.302 04:58:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.302 "name": "raid_bdev1", 00:12:32.302 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:32.302 "strip_size_kb": 0, 00:12:32.302 "state": "online", 00:12:32.302 "raid_level": "raid1", 00:12:32.302 "superblock": true, 00:12:32.302 "num_base_bdevs": 4, 00:12:32.302 "num_base_bdevs_discovered": 3, 00:12:32.302 "num_base_bdevs_operational": 3, 00:12:32.302 "base_bdevs_list": [ 00:12:32.302 { 00:12:32.302 "name": "spare", 00:12:32.302 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:32.302 "is_configured": true, 00:12:32.302 "data_offset": 2048, 00:12:32.302 "data_size": 63488 00:12:32.302 }, 00:12:32.302 { 00:12:32.302 "name": null, 00:12:32.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.302 "is_configured": false, 00:12:32.302 "data_offset": 0, 00:12:32.302 "data_size": 63488 00:12:32.302 }, 00:12:32.302 { 00:12:32.302 "name": "BaseBdev3", 00:12:32.302 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:32.302 "is_configured": true, 00:12:32.302 "data_offset": 2048, 00:12:32.302 "data_size": 63488 00:12:32.302 }, 00:12:32.302 { 00:12:32.302 "name": "BaseBdev4", 00:12:32.302 "uuid": 
"9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:32.302 "is_configured": true, 00:12:32.302 "data_offset": 2048, 00:12:32.302 "data_size": 63488 00:12:32.302 } 00:12:32.302 ] 00:12:32.302 }' 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.302 04:58:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.302 "name": "raid_bdev1", 00:12:32.302 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:32.302 "strip_size_kb": 0, 00:12:32.302 "state": "online", 00:12:32.302 "raid_level": "raid1", 00:12:32.302 "superblock": true, 00:12:32.302 "num_base_bdevs": 4, 00:12:32.302 "num_base_bdevs_discovered": 3, 00:12:32.302 "num_base_bdevs_operational": 3, 00:12:32.302 "base_bdevs_list": [ 00:12:32.302 { 00:12:32.302 "name": "spare", 00:12:32.302 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:32.302 "is_configured": true, 00:12:32.302 "data_offset": 2048, 00:12:32.302 "data_size": 63488 00:12:32.302 }, 00:12:32.302 { 00:12:32.302 "name": null, 00:12:32.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.302 "is_configured": false, 00:12:32.302 "data_offset": 0, 00:12:32.302 "data_size": 63488 00:12:32.302 }, 00:12:32.302 { 00:12:32.302 "name": "BaseBdev3", 00:12:32.302 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:32.302 "is_configured": true, 00:12:32.302 "data_offset": 2048, 00:12:32.302 "data_size": 63488 00:12:32.302 }, 00:12:32.302 { 00:12:32.302 "name": "BaseBdev4", 00:12:32.302 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:32.302 "is_configured": true, 00:12:32.302 "data_offset": 2048, 00:12:32.302 "data_size": 63488 00:12:32.302 } 00:12:32.302 ] 00:12:32.302 }' 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.302 04:58:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.894 [2024-11-21 04:58:49.390879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.894 [2024-11-21 04:58:49.390910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.894 [2024-11-21 04:58:49.391031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.894 [2024-11-21 04:58:49.391126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.894 [2024-11-21 04:58:49.391141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:32.894 
04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:32.894 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:33.153 /dev/nbd0 00:12:33.153 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.153 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:33.153 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:33.153 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:33.153 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.153 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.153 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:33.153 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:33.153 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.154 04:58:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.154 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.154 1+0 records in 00:12:33.154 1+0 records out 00:12:33.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592699 s, 6.9 MB/s 00:12:33.154 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.154 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:33.154 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.154 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.154 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:33.154 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.154 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:33.154 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:33.413 /dev/nbd1 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.413 1+0 records in 00:12:33.413 1+0 records out 00:12:33.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430355 s, 9.5 MB/s 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.413 04:58:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:33.414 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.414 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:33.414 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:33.414 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:33.414 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.414 04:58:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:33.414 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.414 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:33.414 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.414 04:58:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:33.674 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:33.674 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:33.674 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:33.674 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.674 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.674 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:33.674 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:33.674 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.674 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.674 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.996 [2024-11-21 04:58:50.444906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:33.996 [2024-11-21 04:58:50.445025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.996 [2024-11-21 04:58:50.445069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:33.996 [2024-11-21 04:58:50.445101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.996 [2024-11-21 04:58:50.447456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.996 [2024-11-21 04:58:50.447497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:12:33.996 [2024-11-21 04:58:50.447588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:33.996 [2024-11-21 04:58:50.447650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.996 [2024-11-21 04:58:50.447763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.996 [2024-11-21 04:58:50.447850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.996 spare 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.996 [2024-11-21 04:58:50.547731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:33.996 [2024-11-21 04:58:50.547800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.996 [2024-11-21 04:58:50.548149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:33.996 [2024-11-21 04:58:50.548310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:33.996 [2024-11-21 04:58:50.548321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:33.996 [2024-11-21 04:58:50.548462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:33.996 04:58:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.996 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.997 "name": "raid_bdev1", 00:12:33.997 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:33.997 "strip_size_kb": 0, 00:12:33.997 "state": "online", 00:12:33.997 "raid_level": "raid1", 00:12:33.997 "superblock": true, 00:12:33.997 "num_base_bdevs": 4, 00:12:33.997 "num_base_bdevs_discovered": 3, 00:12:33.997 "num_base_bdevs_operational": 3, 00:12:33.997 "base_bdevs_list": [ 00:12:33.997 { 
00:12:33.997 "name": "spare", 00:12:33.997 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:33.997 "is_configured": true, 00:12:33.997 "data_offset": 2048, 00:12:33.997 "data_size": 63488 00:12:33.997 }, 00:12:33.997 { 00:12:33.997 "name": null, 00:12:33.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.997 "is_configured": false, 00:12:33.997 "data_offset": 2048, 00:12:33.997 "data_size": 63488 00:12:33.997 }, 00:12:33.997 { 00:12:33.997 "name": "BaseBdev3", 00:12:33.997 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:33.997 "is_configured": true, 00:12:33.997 "data_offset": 2048, 00:12:33.997 "data_size": 63488 00:12:33.997 }, 00:12:33.997 { 00:12:33.997 "name": "BaseBdev4", 00:12:33.997 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:33.997 "is_configured": true, 00:12:33.997 "data_offset": 2048, 00:12:33.997 "data_size": 63488 00:12:33.997 } 00:12:33.997 ] 00:12:33.997 }' 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.997 04:58:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.565 "name": "raid_bdev1", 00:12:34.565 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:34.565 "strip_size_kb": 0, 00:12:34.565 "state": "online", 00:12:34.565 "raid_level": "raid1", 00:12:34.565 "superblock": true, 00:12:34.565 "num_base_bdevs": 4, 00:12:34.565 "num_base_bdevs_discovered": 3, 00:12:34.565 "num_base_bdevs_operational": 3, 00:12:34.565 "base_bdevs_list": [ 00:12:34.565 { 00:12:34.565 "name": "spare", 00:12:34.565 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:34.565 "is_configured": true, 00:12:34.565 "data_offset": 2048, 00:12:34.565 "data_size": 63488 00:12:34.565 }, 00:12:34.565 { 00:12:34.565 "name": null, 00:12:34.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.565 "is_configured": false, 00:12:34.565 "data_offset": 2048, 00:12:34.565 "data_size": 63488 00:12:34.565 }, 00:12:34.565 { 00:12:34.565 "name": "BaseBdev3", 00:12:34.565 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:34.565 "is_configured": true, 00:12:34.565 "data_offset": 2048, 00:12:34.565 "data_size": 63488 00:12:34.565 }, 00:12:34.565 { 00:12:34.565 "name": "BaseBdev4", 00:12:34.565 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:34.565 "is_configured": true, 00:12:34.565 "data_offset": 2048, 00:12:34.565 "data_size": 63488 00:12:34.565 } 00:12:34.565 ] 00:12:34.565 }' 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.565 04:58:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.565 [2024-11-21 04:58:51.171721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.565 04:58:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.565 "name": "raid_bdev1", 00:12:34.565 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:34.565 "strip_size_kb": 0, 00:12:34.565 "state": "online", 00:12:34.565 "raid_level": "raid1", 00:12:34.565 "superblock": true, 00:12:34.565 "num_base_bdevs": 4, 00:12:34.565 "num_base_bdevs_discovered": 2, 00:12:34.565 "num_base_bdevs_operational": 2, 00:12:34.565 "base_bdevs_list": [ 00:12:34.565 { 00:12:34.565 "name": null, 00:12:34.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.565 "is_configured": false, 00:12:34.565 "data_offset": 0, 00:12:34.565 "data_size": 63488 00:12:34.565 }, 00:12:34.565 { 00:12:34.565 "name": null, 00:12:34.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.565 "is_configured": false, 00:12:34.565 "data_offset": 2048, 00:12:34.565 "data_size": 63488 00:12:34.565 }, 00:12:34.565 { 00:12:34.565 "name": "BaseBdev3", 00:12:34.565 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:34.565 
"is_configured": true, 00:12:34.565 "data_offset": 2048, 00:12:34.565 "data_size": 63488 00:12:34.565 }, 00:12:34.565 { 00:12:34.565 "name": "BaseBdev4", 00:12:34.565 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:34.565 "is_configured": true, 00:12:34.565 "data_offset": 2048, 00:12:34.565 "data_size": 63488 00:12:34.565 } 00:12:34.565 ] 00:12:34.565 }' 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.565 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.135 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:35.135 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.135 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.135 [2024-11-21 04:58:51.607098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.135 [2024-11-21 04:58:51.607378] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:35.135 [2024-11-21 04:58:51.607448] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:35.135 [2024-11-21 04:58:51.607522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.135 [2024-11-21 04:58:51.611611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:35.135 04:58:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.135 04:58:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:35.135 [2024-11-21 04:58:51.613698] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.075 "name": "raid_bdev1", 00:12:36.075 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:36.075 "strip_size_kb": 0, 00:12:36.075 "state": "online", 00:12:36.075 "raid_level": "raid1", 
00:12:36.075 "superblock": true, 00:12:36.075 "num_base_bdevs": 4, 00:12:36.075 "num_base_bdevs_discovered": 3, 00:12:36.075 "num_base_bdevs_operational": 3, 00:12:36.075 "process": { 00:12:36.075 "type": "rebuild", 00:12:36.075 "target": "spare", 00:12:36.075 "progress": { 00:12:36.075 "blocks": 20480, 00:12:36.075 "percent": 32 00:12:36.075 } 00:12:36.075 }, 00:12:36.075 "base_bdevs_list": [ 00:12:36.075 { 00:12:36.075 "name": "spare", 00:12:36.075 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:36.075 "is_configured": true, 00:12:36.075 "data_offset": 2048, 00:12:36.075 "data_size": 63488 00:12:36.075 }, 00:12:36.075 { 00:12:36.075 "name": null, 00:12:36.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.075 "is_configured": false, 00:12:36.075 "data_offset": 2048, 00:12:36.075 "data_size": 63488 00:12:36.075 }, 00:12:36.075 { 00:12:36.075 "name": "BaseBdev3", 00:12:36.075 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:36.075 "is_configured": true, 00:12:36.075 "data_offset": 2048, 00:12:36.075 "data_size": 63488 00:12:36.075 }, 00:12:36.075 { 00:12:36.075 "name": "BaseBdev4", 00:12:36.075 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:36.075 "is_configured": true, 00:12:36.075 "data_offset": 2048, 00:12:36.075 "data_size": 63488 00:12:36.075 } 00:12:36.075 ] 00:12:36.075 }' 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:36.075 04:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.075 [2024-11-21 04:58:52.735000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.336 [2024-11-21 04:58:52.818763] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:36.336 [2024-11-21 04:58:52.818971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.336 [2024-11-21 04:58:52.818990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.336 [2024-11-21 04:58:52.819000] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.336 "name": "raid_bdev1", 00:12:36.336 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:36.336 "strip_size_kb": 0, 00:12:36.336 "state": "online", 00:12:36.336 "raid_level": "raid1", 00:12:36.336 "superblock": true, 00:12:36.336 "num_base_bdevs": 4, 00:12:36.336 "num_base_bdevs_discovered": 2, 00:12:36.336 "num_base_bdevs_operational": 2, 00:12:36.336 "base_bdevs_list": [ 00:12:36.336 { 00:12:36.336 "name": null, 00:12:36.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.336 "is_configured": false, 00:12:36.336 "data_offset": 0, 00:12:36.336 "data_size": 63488 00:12:36.336 }, 00:12:36.336 { 00:12:36.336 "name": null, 00:12:36.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.336 "is_configured": false, 00:12:36.336 "data_offset": 2048, 00:12:36.336 "data_size": 63488 00:12:36.336 }, 00:12:36.336 { 00:12:36.336 "name": "BaseBdev3", 00:12:36.336 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:36.336 "is_configured": true, 00:12:36.336 "data_offset": 2048, 00:12:36.336 "data_size": 63488 00:12:36.336 }, 00:12:36.336 { 00:12:36.336 "name": "BaseBdev4", 00:12:36.336 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:36.336 "is_configured": true, 00:12:36.336 "data_offset": 2048, 00:12:36.336 "data_size": 63488 00:12:36.336 } 00:12:36.336 ] 00:12:36.336 }' 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:36.336 04:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.596 04:58:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:36.596 04:58:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.596 04:58:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.596 [2024-11-21 04:58:53.250810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:36.596 [2024-11-21 04:58:53.250939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.596 [2024-11-21 04:58:53.250978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:36.597 [2024-11-21 04:58:53.251007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.597 [2024-11-21 04:58:53.251540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.597 [2024-11-21 04:58:53.251604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:36.597 [2024-11-21 04:58:53.251755] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:36.597 [2024-11-21 04:58:53.251806] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:36.597 [2024-11-21 04:58:53.251857] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:36.597 [2024-11-21 04:58:53.251919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:36.597 [2024-11-21 04:58:53.256038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:36.597 spare 00:12:36.597 04:58:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.597 04:58:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:36.597 [2024-11-21 04:58:53.257953] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.537 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.537 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.537 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.537 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.537 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.537 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.537 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.537 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.537 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.798 "name": "raid_bdev1", 00:12:37.798 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:37.798 "strip_size_kb": 0, 00:12:37.798 "state": "online", 00:12:37.798 
"raid_level": "raid1", 00:12:37.798 "superblock": true, 00:12:37.798 "num_base_bdevs": 4, 00:12:37.798 "num_base_bdevs_discovered": 3, 00:12:37.798 "num_base_bdevs_operational": 3, 00:12:37.798 "process": { 00:12:37.798 "type": "rebuild", 00:12:37.798 "target": "spare", 00:12:37.798 "progress": { 00:12:37.798 "blocks": 20480, 00:12:37.798 "percent": 32 00:12:37.798 } 00:12:37.798 }, 00:12:37.798 "base_bdevs_list": [ 00:12:37.798 { 00:12:37.798 "name": "spare", 00:12:37.798 "uuid": "a4dad082-9fb3-5763-98d0-1720ac6bf202", 00:12:37.798 "is_configured": true, 00:12:37.798 "data_offset": 2048, 00:12:37.798 "data_size": 63488 00:12:37.798 }, 00:12:37.798 { 00:12:37.798 "name": null, 00:12:37.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.798 "is_configured": false, 00:12:37.798 "data_offset": 2048, 00:12:37.798 "data_size": 63488 00:12:37.798 }, 00:12:37.798 { 00:12:37.798 "name": "BaseBdev3", 00:12:37.798 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:37.798 "is_configured": true, 00:12:37.798 "data_offset": 2048, 00:12:37.798 "data_size": 63488 00:12:37.798 }, 00:12:37.798 { 00:12:37.798 "name": "BaseBdev4", 00:12:37.798 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:37.798 "is_configured": true, 00:12:37.798 "data_offset": 2048, 00:12:37.798 "data_size": 63488 00:12:37.798 } 00:12:37.798 ] 00:12:37.798 }' 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.798 [2024-11-21 04:58:54.374674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.798 [2024-11-21 04:58:54.463038] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:37.798 [2024-11-21 04:58:54.463164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.798 [2024-11-21 04:58:54.463184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.798 [2024-11-21 04:58:54.463191] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.798 
04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.798 "name": "raid_bdev1", 00:12:37.798 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:37.798 "strip_size_kb": 0, 00:12:37.798 "state": "online", 00:12:37.798 "raid_level": "raid1", 00:12:37.798 "superblock": true, 00:12:37.798 "num_base_bdevs": 4, 00:12:37.798 "num_base_bdevs_discovered": 2, 00:12:37.798 "num_base_bdevs_operational": 2, 00:12:37.798 "base_bdevs_list": [ 00:12:37.798 { 00:12:37.798 "name": null, 00:12:37.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.798 "is_configured": false, 00:12:37.798 "data_offset": 0, 00:12:37.798 "data_size": 63488 00:12:37.798 }, 00:12:37.798 { 00:12:37.798 "name": null, 00:12:37.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.798 "is_configured": false, 00:12:37.798 "data_offset": 2048, 00:12:37.798 "data_size": 63488 00:12:37.798 }, 00:12:37.798 { 00:12:37.798 "name": "BaseBdev3", 00:12:37.798 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:37.798 "is_configured": true, 00:12:37.798 "data_offset": 2048, 00:12:37.798 "data_size": 63488 00:12:37.798 }, 00:12:37.798 { 00:12:37.798 "name": "BaseBdev4", 00:12:37.798 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:37.798 "is_configured": true, 00:12:37.798 "data_offset": 2048, 00:12:37.798 "data_size": 63488 00:12:37.798 } 00:12:37.798 ] 00:12:37.798 }' 00:12:37.798 04:58:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.798 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.369 "name": "raid_bdev1", 00:12:38.369 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:38.369 "strip_size_kb": 0, 00:12:38.369 "state": "online", 00:12:38.369 "raid_level": "raid1", 00:12:38.369 "superblock": true, 00:12:38.369 "num_base_bdevs": 4, 00:12:38.369 "num_base_bdevs_discovered": 2, 00:12:38.369 "num_base_bdevs_operational": 2, 00:12:38.369 "base_bdevs_list": [ 00:12:38.369 { 00:12:38.369 "name": null, 00:12:38.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.369 "is_configured": false, 00:12:38.369 "data_offset": 0, 00:12:38.369 "data_size": 63488 00:12:38.369 }, 00:12:38.369 
{ 00:12:38.369 "name": null, 00:12:38.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.369 "is_configured": false, 00:12:38.369 "data_offset": 2048, 00:12:38.369 "data_size": 63488 00:12:38.369 }, 00:12:38.369 { 00:12:38.369 "name": "BaseBdev3", 00:12:38.369 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:38.369 "is_configured": true, 00:12:38.369 "data_offset": 2048, 00:12:38.369 "data_size": 63488 00:12:38.369 }, 00:12:38.369 { 00:12:38.369 "name": "BaseBdev4", 00:12:38.369 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:38.369 "is_configured": true, 00:12:38.369 "data_offset": 2048, 00:12:38.369 "data_size": 63488 00:12:38.369 } 00:12:38.369 ] 00:12:38.369 }' 00:12:38.369 04:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.369 04:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.369 04:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.369 04:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.369 04:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:38.369 04:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.369 04:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.369 04:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.369 04:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:38.369 04:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.369 04:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.629 [2024-11-21 04:58:55.102769] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:38.629 [2024-11-21 04:58:55.102830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.629 [2024-11-21 04:58:55.102853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:38.629 [2024-11-21 04:58:55.102862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.629 [2024-11-21 04:58:55.103346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.629 [2024-11-21 04:58:55.103373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.629 [2024-11-21 04:58:55.103469] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:38.629 [2024-11-21 04:58:55.103484] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:38.629 [2024-11-21 04:58:55.103494] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:38.629 [2024-11-21 04:58:55.103504] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:38.629 BaseBdev1 00:12:38.629 04:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.629 04:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.567 04:58:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.567 "name": "raid_bdev1", 00:12:39.567 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:39.567 "strip_size_kb": 0, 00:12:39.567 "state": "online", 00:12:39.567 "raid_level": "raid1", 00:12:39.567 "superblock": true, 00:12:39.567 "num_base_bdevs": 4, 00:12:39.567 "num_base_bdevs_discovered": 2, 00:12:39.567 "num_base_bdevs_operational": 2, 00:12:39.567 "base_bdevs_list": [ 00:12:39.567 { 00:12:39.567 "name": null, 00:12:39.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.567 "is_configured": false, 00:12:39.567 "data_offset": 0, 00:12:39.567 "data_size": 63488 00:12:39.567 }, 00:12:39.567 { 00:12:39.567 "name": null, 00:12:39.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.567 
"is_configured": false, 00:12:39.567 "data_offset": 2048, 00:12:39.567 "data_size": 63488 00:12:39.567 }, 00:12:39.567 { 00:12:39.567 "name": "BaseBdev3", 00:12:39.567 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:39.567 "is_configured": true, 00:12:39.567 "data_offset": 2048, 00:12:39.567 "data_size": 63488 00:12:39.567 }, 00:12:39.567 { 00:12:39.567 "name": "BaseBdev4", 00:12:39.567 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:39.567 "is_configured": true, 00:12:39.567 "data_offset": 2048, 00:12:39.567 "data_size": 63488 00:12:39.567 } 00:12:39.567 ] 00:12:39.567 }' 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.567 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.138 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:40.138 "name": "raid_bdev1", 00:12:40.138 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:40.138 "strip_size_kb": 0, 00:12:40.138 "state": "online", 00:12:40.138 "raid_level": "raid1", 00:12:40.138 "superblock": true, 00:12:40.138 "num_base_bdevs": 4, 00:12:40.138 "num_base_bdevs_discovered": 2, 00:12:40.138 "num_base_bdevs_operational": 2, 00:12:40.138 "base_bdevs_list": [ 00:12:40.138 { 00:12:40.138 "name": null, 00:12:40.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.138 "is_configured": false, 00:12:40.138 "data_offset": 0, 00:12:40.138 "data_size": 63488 00:12:40.138 }, 00:12:40.138 { 00:12:40.138 "name": null, 00:12:40.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.138 "is_configured": false, 00:12:40.138 "data_offset": 2048, 00:12:40.138 "data_size": 63488 00:12:40.138 }, 00:12:40.138 { 00:12:40.138 "name": "BaseBdev3", 00:12:40.139 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:40.139 "is_configured": true, 00:12:40.139 "data_offset": 2048, 00:12:40.139 "data_size": 63488 00:12:40.139 }, 00:12:40.139 { 00:12:40.139 "name": "BaseBdev4", 00:12:40.139 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:40.139 "is_configured": true, 00:12:40.139 "data_offset": 2048, 00:12:40.139 "data_size": 63488 00:12:40.139 } 00:12:40.139 ] 00:12:40.139 }' 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.139 [2024-11-21 04:58:56.684182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.139 [2024-11-21 04:58:56.684346] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:40.139 [2024-11-21 04:58:56.684360] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:40.139 request: 00:12:40.139 { 00:12:40.139 "base_bdev": "BaseBdev1", 00:12:40.139 "raid_bdev": "raid_bdev1", 00:12:40.139 "method": "bdev_raid_add_base_bdev", 00:12:40.139 "req_id": 1 00:12:40.139 } 00:12:40.139 Got JSON-RPC error response 00:12:40.139 response: 00:12:40.139 { 00:12:40.139 "code": -22, 00:12:40.139 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:40.139 } 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:40.139 04:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.081 "name": "raid_bdev1", 00:12:41.081 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:41.081 "strip_size_kb": 0, 00:12:41.081 "state": "online", 00:12:41.081 "raid_level": "raid1", 00:12:41.081 "superblock": true, 00:12:41.081 "num_base_bdevs": 4, 00:12:41.081 "num_base_bdevs_discovered": 2, 00:12:41.081 "num_base_bdevs_operational": 2, 00:12:41.081 "base_bdevs_list": [ 00:12:41.081 { 00:12:41.081 "name": null, 00:12:41.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.081 "is_configured": false, 00:12:41.081 "data_offset": 0, 00:12:41.081 "data_size": 63488 00:12:41.081 }, 00:12:41.081 { 00:12:41.081 "name": null, 00:12:41.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.081 "is_configured": false, 00:12:41.081 "data_offset": 2048, 00:12:41.081 "data_size": 63488 00:12:41.081 }, 00:12:41.081 { 00:12:41.081 "name": "BaseBdev3", 00:12:41.081 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:41.081 "is_configured": true, 00:12:41.081 "data_offset": 2048, 00:12:41.081 "data_size": 63488 00:12:41.081 }, 00:12:41.081 { 00:12:41.081 "name": "BaseBdev4", 00:12:41.081 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:41.081 "is_configured": true, 00:12:41.081 "data_offset": 2048, 00:12:41.081 "data_size": 63488 00:12:41.081 } 00:12:41.081 ] 00:12:41.081 }' 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.081 04:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.651 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.651 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.651 04:58:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.651 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:41.651 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.651 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.651 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.651 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.651 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.651 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.651 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.651 "name": "raid_bdev1", 00:12:41.651 "uuid": "baef96a1-1467-452a-a442-d8e1ef5ee7ae", 00:12:41.651 "strip_size_kb": 0, 00:12:41.651 "state": "online", 00:12:41.651 "raid_level": "raid1", 00:12:41.651 "superblock": true, 00:12:41.651 "num_base_bdevs": 4, 00:12:41.651 "num_base_bdevs_discovered": 2, 00:12:41.651 "num_base_bdevs_operational": 2, 00:12:41.651 "base_bdevs_list": [ 00:12:41.651 { 00:12:41.651 "name": null, 00:12:41.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.651 "is_configured": false, 00:12:41.651 "data_offset": 0, 00:12:41.651 "data_size": 63488 00:12:41.651 }, 00:12:41.651 { 00:12:41.651 "name": null, 00:12:41.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.651 "is_configured": false, 00:12:41.651 "data_offset": 2048, 00:12:41.652 "data_size": 63488 00:12:41.652 }, 00:12:41.652 { 00:12:41.652 "name": "BaseBdev3", 00:12:41.652 "uuid": "5fc3d8bf-e469-54b6-962e-5cf3e2fef5fa", 00:12:41.652 "is_configured": true, 00:12:41.652 "data_offset": 2048, 00:12:41.652 "data_size": 63488 00:12:41.652 }, 
00:12:41.652 { 00:12:41.652 "name": "BaseBdev4", 00:12:41.652 "uuid": "9a1b1fe9-0ead-5493-b005-70cc8731c072", 00:12:41.652 "is_configured": true, 00:12:41.652 "data_offset": 2048, 00:12:41.652 "data_size": 63488 00:12:41.652 } 00:12:41.652 ] 00:12:41.652 }' 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88758 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 88758 ']' 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 88758 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88758 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.652 killing process with pid 88758 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88758' 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 88758 00:12:41.652 Received shutdown signal, test time was about 60.000000 seconds 00:12:41.652 00:12:41.652 Latency(us) 00:12:41.652 
[2024-11-21T04:58:58.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.652 [2024-11-21T04:58:58.387Z] =================================================================================================================== 00:12:41.652 [2024-11-21T04:58:58.387Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:41.652 [2024-11-21 04:58:58.322821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.652 [2024-11-21 04:58:58.322970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.652 [2024-11-21 04:58:58.323044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.652 [2024-11-21 04:58:58.323056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:41.652 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 88758 00:12:41.652 [2024-11-21 04:58:58.374108] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.921 04:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:41.921 00:12:41.921 real 0m22.983s 00:12:41.922 user 0m28.255s 00:12:41.922 sys 0m3.535s 00:12:41.922 ************************************ 00:12:41.922 END TEST raid_rebuild_test_sb 00:12:41.922 ************************************ 00:12:41.922 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.922 04:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.922 04:58:58 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:41.922 04:58:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:41.922 04:58:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.922 04:58:58 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:12:42.196 ************************************ 00:12:42.196 START TEST raid_rebuild_test_io 00:12:42.196 ************************************ 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:42.196 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89489 00:12:42.197 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:42.197 04:58:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89489 00:12:42.197 04:58:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 89489 ']' 00:12:42.197 04:58:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.197 04:58:58 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.197 04:58:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.197 04:58:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.197 04:58:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.197 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:42.197 Zero copy mechanism will not be used. 00:12:42.197 [2024-11-21 04:58:58.747959] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:12:42.197 [2024-11-21 04:58:58.748079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89489 ] 00:12:42.197 [2024-11-21 04:58:58.919421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.457 [2024-11-21 04:58:58.945466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.457 [2024-11-21 04:58:58.987773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.457 [2024-11-21 04:58:58.987810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 BaseBdev1_malloc 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 [2024-11-21 04:58:59.594459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:43.026 [2024-11-21 04:58:59.594552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.026 [2024-11-21 04:58:59.594582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:43.026 [2024-11-21 04:58:59.594594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.026 [2024-11-21 04:58:59.597035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.026 [2024-11-21 04:58:59.597079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:43.026 BaseBdev1 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:12:43.026 BaseBdev2_malloc 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 [2024-11-21 04:58:59.623504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:43.026 [2024-11-21 04:58:59.623632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.026 [2024-11-21 04:58:59.623661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:43.026 [2024-11-21 04:58:59.623672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.026 [2024-11-21 04:58:59.625927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.026 [2024-11-21 04:58:59.625962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:43.026 BaseBdev2 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 BaseBdev3_malloc 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 [2024-11-21 04:58:59.652608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:43.026 [2024-11-21 04:58:59.652664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.026 [2024-11-21 04:58:59.652686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:43.026 [2024-11-21 04:58:59.652696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.026 [2024-11-21 04:58:59.654865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.026 [2024-11-21 04:58:59.654960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:43.026 BaseBdev3 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 BaseBdev4_malloc 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 [2024-11-21 04:58:59.693070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:43.026 [2024-11-21 04:58:59.693183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.026 [2024-11-21 04:58:59.693214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:43.026 [2024-11-21 04:58:59.693224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.026 [2024-11-21 04:58:59.695411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.026 [2024-11-21 04:58:59.695447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:43.026 BaseBdev4 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 spare_malloc 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 spare_delay 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.026 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 [2024-11-21 04:58:59.733812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:43.026 [2024-11-21 04:58:59.733869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.026 [2024-11-21 04:58:59.733894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:43.026 [2024-11-21 04:58:59.733903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.026 [2024-11-21 04:58:59.736034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.026 [2024-11-21 04:58:59.736130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:43.027 spare 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.027 [2024-11-21 04:58:59.745855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.027 [2024-11-21 04:58:59.747701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.027 [2024-11-21 04:58:59.747837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.027 [2024-11-21 04:58:59.747890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:12:43.027 [2024-11-21 04:58:59.747980] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:43.027 [2024-11-21 04:58:59.747990] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:43.027 [2024-11-21 04:58:59.748290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:43.027 [2024-11-21 04:58:59.748446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:43.027 [2024-11-21 04:58:59.748460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:43.027 [2024-11-21 04:58:59.748582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.027 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.288 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.288 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.288 04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.288 "name": "raid_bdev1", 00:12:43.288 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:43.288 "strip_size_kb": 0, 00:12:43.288 "state": "online", 00:12:43.288 "raid_level": "raid1", 00:12:43.288 "superblock": false, 00:12:43.288 "num_base_bdevs": 4, 00:12:43.288 "num_base_bdevs_discovered": 4, 00:12:43.288 "num_base_bdevs_operational": 4, 00:12:43.288 "base_bdevs_list": [ 00:12:43.288 { 00:12:43.288 "name": "BaseBdev1", 00:12:43.288 "uuid": "a3a21ddd-8085-5b2f-ae19-4c98391a506c", 00:12:43.288 "is_configured": true, 00:12:43.288 "data_offset": 0, 00:12:43.288 "data_size": 65536 00:12:43.288 }, 00:12:43.288 { 00:12:43.288 "name": "BaseBdev2", 00:12:43.288 "uuid": "c750a935-8612-5d95-bb91-8416a342b61c", 00:12:43.288 "is_configured": true, 00:12:43.288 "data_offset": 0, 00:12:43.288 "data_size": 65536 00:12:43.288 }, 00:12:43.288 { 00:12:43.288 "name": "BaseBdev3", 00:12:43.288 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:43.288 "is_configured": true, 00:12:43.288 "data_offset": 0, 00:12:43.288 "data_size": 65536 00:12:43.288 }, 00:12:43.288 { 00:12:43.288 "name": "BaseBdev4", 00:12:43.288 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:43.288 "is_configured": true, 00:12:43.288 "data_offset": 0, 00:12:43.288 "data_size": 65536 00:12:43.288 } 00:12:43.288 ] 00:12:43.288 }' 00:12:43.288 
04:58:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.288 04:58:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.548 [2024-11-21 04:59:00.209402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.548 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.808 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:43.809 04:59:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.809 [2024-11-21 04:59:00.296902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.809 "name": "raid_bdev1", 00:12:43.809 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:43.809 "strip_size_kb": 0, 00:12:43.809 "state": "online", 00:12:43.809 "raid_level": "raid1", 00:12:43.809 "superblock": false, 00:12:43.809 "num_base_bdevs": 4, 00:12:43.809 "num_base_bdevs_discovered": 3, 00:12:43.809 "num_base_bdevs_operational": 3, 00:12:43.809 "base_bdevs_list": [ 00:12:43.809 { 00:12:43.809 "name": null, 00:12:43.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.809 "is_configured": false, 00:12:43.809 "data_offset": 0, 00:12:43.809 "data_size": 65536 00:12:43.809 }, 00:12:43.809 { 00:12:43.809 "name": "BaseBdev2", 00:12:43.809 "uuid": "c750a935-8612-5d95-bb91-8416a342b61c", 00:12:43.809 "is_configured": true, 00:12:43.809 "data_offset": 0, 00:12:43.809 "data_size": 65536 00:12:43.809 }, 00:12:43.809 { 00:12:43.809 "name": "BaseBdev3", 00:12:43.809 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:43.809 "is_configured": true, 00:12:43.809 "data_offset": 0, 00:12:43.809 "data_size": 65536 00:12:43.809 }, 00:12:43.809 { 00:12:43.809 "name": "BaseBdev4", 00:12:43.809 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:43.809 "is_configured": true, 00:12:43.809 "data_offset": 0, 00:12:43.809 "data_size": 65536 00:12:43.809 } 00:12:43.809 ] 00:12:43.809 }' 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.809 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.809 [2024-11-21 04:59:00.378674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:43.809 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:43.809 Zero copy mechanism will not be used. 00:12:43.809 Running I/O for 60 seconds... 
00:12:44.069 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:44.069 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.069 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.069 [2024-11-21 04:59:00.780979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.329 04:59:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.329 04:59:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:44.329 [2024-11-21 04:59:00.818242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:44.329 [2024-11-21 04:59:00.820335] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.329 [2024-11-21 04:59:00.929403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:44.329 [2024-11-21 04:59:00.930948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:44.590 [2024-11-21 04:59:01.147852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.590 [2024-11-21 04:59:01.148729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.849 191.00 IOPS, 573.00 MiB/s [2024-11-21T04:59:01.584Z] [2024-11-21 04:59:01.493520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:44.849 [2024-11-21 04:59:01.495071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:45.109 [2024-11-21 04:59:01.704800] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:45.109 [2024-11-21 04:59:01.705254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:45.109 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.109 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.109 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.109 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.109 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.109 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.109 04:59:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.109 04:59:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.109 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.109 04:59:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.370 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.370 "name": "raid_bdev1", 00:12:45.370 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:45.370 "strip_size_kb": 0, 00:12:45.370 "state": "online", 00:12:45.370 "raid_level": "raid1", 00:12:45.370 "superblock": false, 00:12:45.370 "num_base_bdevs": 4, 00:12:45.370 "num_base_bdevs_discovered": 4, 00:12:45.370 "num_base_bdevs_operational": 4, 00:12:45.370 "process": { 00:12:45.370 "type": "rebuild", 00:12:45.370 "target": "spare", 00:12:45.370 "progress": { 00:12:45.370 "blocks": 10240, 
00:12:45.370 "percent": 15 00:12:45.370 } 00:12:45.370 }, 00:12:45.370 "base_bdevs_list": [ 00:12:45.370 { 00:12:45.370 "name": "spare", 00:12:45.370 "uuid": "28fc1d20-ee14-53d5-86b2-a2c94ba1213e", 00:12:45.370 "is_configured": true, 00:12:45.370 "data_offset": 0, 00:12:45.370 "data_size": 65536 00:12:45.370 }, 00:12:45.370 { 00:12:45.370 "name": "BaseBdev2", 00:12:45.370 "uuid": "c750a935-8612-5d95-bb91-8416a342b61c", 00:12:45.370 "is_configured": true, 00:12:45.370 "data_offset": 0, 00:12:45.370 "data_size": 65536 00:12:45.370 }, 00:12:45.370 { 00:12:45.370 "name": "BaseBdev3", 00:12:45.370 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:45.370 "is_configured": true, 00:12:45.370 "data_offset": 0, 00:12:45.370 "data_size": 65536 00:12:45.370 }, 00:12:45.370 { 00:12:45.370 "name": "BaseBdev4", 00:12:45.370 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:45.370 "is_configured": true, 00:12:45.370 "data_offset": 0, 00:12:45.370 "data_size": 65536 00:12:45.370 } 00:12:45.370 ] 00:12:45.370 }' 00:12:45.370 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.370 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.370 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.370 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.370 04:59:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:45.370 04:59:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.370 04:59:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.370 [2024-11-21 04:59:01.975572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.370 [2024-11-21 04:59:02.068649] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:45.370 [2024-11-21 04:59:02.085135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.370 [2024-11-21 04:59:02.085217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.370 [2024-11-21 04:59:02.085232] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:45.630 [2024-11-21 04:59:02.103509] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.630 "name": "raid_bdev1", 00:12:45.630 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:45.630 "strip_size_kb": 0, 00:12:45.630 "state": "online", 00:12:45.630 "raid_level": "raid1", 00:12:45.630 "superblock": false, 00:12:45.630 "num_base_bdevs": 4, 00:12:45.630 "num_base_bdevs_discovered": 3, 00:12:45.630 "num_base_bdevs_operational": 3, 00:12:45.630 "base_bdevs_list": [ 00:12:45.630 { 00:12:45.630 "name": null, 00:12:45.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.630 "is_configured": false, 00:12:45.630 "data_offset": 0, 00:12:45.630 "data_size": 65536 00:12:45.630 }, 00:12:45.630 { 00:12:45.630 "name": "BaseBdev2", 00:12:45.630 "uuid": "c750a935-8612-5d95-bb91-8416a342b61c", 00:12:45.630 "is_configured": true, 00:12:45.630 "data_offset": 0, 00:12:45.630 "data_size": 65536 00:12:45.630 }, 00:12:45.630 { 00:12:45.630 "name": "BaseBdev3", 00:12:45.630 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:45.630 "is_configured": true, 00:12:45.630 "data_offset": 0, 00:12:45.630 "data_size": 65536 00:12:45.630 }, 00:12:45.630 { 00:12:45.630 "name": "BaseBdev4", 00:12:45.630 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:45.630 "is_configured": true, 00:12:45.630 "data_offset": 0, 00:12:45.630 "data_size": 65536 00:12:45.630 } 00:12:45.630 ] 00:12:45.630 }' 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.630 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.891 147.50 IOPS, 442.50 MiB/s 
[2024-11-21T04:59:02.626Z] 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.891 "name": "raid_bdev1", 00:12:45.891 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:45.891 "strip_size_kb": 0, 00:12:45.891 "state": "online", 00:12:45.891 "raid_level": "raid1", 00:12:45.891 "superblock": false, 00:12:45.891 "num_base_bdevs": 4, 00:12:45.891 "num_base_bdevs_discovered": 3, 00:12:45.891 "num_base_bdevs_operational": 3, 00:12:45.891 "base_bdevs_list": [ 00:12:45.891 { 00:12:45.891 "name": null, 00:12:45.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.891 "is_configured": false, 00:12:45.891 "data_offset": 0, 00:12:45.891 "data_size": 65536 00:12:45.891 }, 00:12:45.891 { 00:12:45.891 "name": "BaseBdev2", 00:12:45.891 "uuid": "c750a935-8612-5d95-bb91-8416a342b61c", 00:12:45.891 "is_configured": true, 00:12:45.891 
"data_offset": 0, 00:12:45.891 "data_size": 65536 00:12:45.891 }, 00:12:45.891 { 00:12:45.891 "name": "BaseBdev3", 00:12:45.891 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:45.891 "is_configured": true, 00:12:45.891 "data_offset": 0, 00:12:45.891 "data_size": 65536 00:12:45.891 }, 00:12:45.891 { 00:12:45.891 "name": "BaseBdev4", 00:12:45.891 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:45.891 "is_configured": true, 00:12:45.891 "data_offset": 0, 00:12:45.891 "data_size": 65536 00:12:45.891 } 00:12:45.891 ] 00:12:45.891 }' 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.891 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.151 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.151 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.151 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.151 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.151 [2024-11-21 04:59:02.656889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.152 04:59:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.152 04:59:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:46.152 [2024-11-21 04:59:02.704634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:46.152 [2024-11-21 04:59:02.706603] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.152 [2024-11-21 04:59:02.826830] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:46.152 [2024-11-21 04:59:02.828292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:46.412 [2024-11-21 04:59:03.055699] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:46.412 [2024-11-21 04:59:03.056575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:46.672 [2024-11-21 04:59:03.382838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:46.672 [2024-11-21 04:59:03.384387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:46.931 138.33 IOPS, 415.00 MiB/s [2024-11-21T04:59:03.666Z] [2024-11-21 04:59:03.594317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:46.931 [2024-11-21 04:59:03.594812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:47.191 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.191 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.191 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.191 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.192 "name": "raid_bdev1", 00:12:47.192 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:47.192 "strip_size_kb": 0, 00:12:47.192 "state": "online", 00:12:47.192 "raid_level": "raid1", 00:12:47.192 "superblock": false, 00:12:47.192 "num_base_bdevs": 4, 00:12:47.192 "num_base_bdevs_discovered": 4, 00:12:47.192 "num_base_bdevs_operational": 4, 00:12:47.192 "process": { 00:12:47.192 "type": "rebuild", 00:12:47.192 "target": "spare", 00:12:47.192 "progress": { 00:12:47.192 "blocks": 10240, 00:12:47.192 "percent": 15 00:12:47.192 } 00:12:47.192 }, 00:12:47.192 "base_bdevs_list": [ 00:12:47.192 { 00:12:47.192 "name": "spare", 00:12:47.192 "uuid": "28fc1d20-ee14-53d5-86b2-a2c94ba1213e", 00:12:47.192 "is_configured": true, 00:12:47.192 "data_offset": 0, 00:12:47.192 "data_size": 65536 00:12:47.192 }, 00:12:47.192 { 00:12:47.192 "name": "BaseBdev2", 00:12:47.192 "uuid": "c750a935-8612-5d95-bb91-8416a342b61c", 00:12:47.192 "is_configured": true, 00:12:47.192 "data_offset": 0, 00:12:47.192 "data_size": 65536 00:12:47.192 }, 00:12:47.192 { 00:12:47.192 "name": "BaseBdev3", 00:12:47.192 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:47.192 "is_configured": true, 00:12:47.192 "data_offset": 0, 00:12:47.192 "data_size": 65536 00:12:47.192 }, 00:12:47.192 { 00:12:47.192 "name": "BaseBdev4", 00:12:47.192 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:47.192 "is_configured": true, 00:12:47.192 "data_offset": 0, 00:12:47.192 "data_size": 65536 00:12:47.192 } 00:12:47.192 ] 00:12:47.192 }' 
00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.192 04:59:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.192 [2024-11-21 04:59:03.860041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.452 [2024-11-21 04:59:03.946249] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:12:47.452 [2024-11-21 04:59:03.946314] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:12:47.452 [2024-11-21 04:59:03.947067] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:47.452 04:59:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.452 04:59:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.452 "name": "raid_bdev1", 00:12:47.452 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:47.452 "strip_size_kb": 0, 00:12:47.452 "state": "online", 00:12:47.452 "raid_level": "raid1", 00:12:47.452 "superblock": false, 00:12:47.452 "num_base_bdevs": 4, 00:12:47.452 "num_base_bdevs_discovered": 3, 00:12:47.452 "num_base_bdevs_operational": 3, 00:12:47.452 "process": { 00:12:47.452 "type": "rebuild", 00:12:47.452 "target": "spare", 00:12:47.452 "progress": { 00:12:47.452 "blocks": 14336, 00:12:47.452 "percent": 21 00:12:47.452 } 00:12:47.452 }, 00:12:47.452 "base_bdevs_list": [ 00:12:47.452 { 00:12:47.452 "name": "spare", 00:12:47.452 "uuid": 
"28fc1d20-ee14-53d5-86b2-a2c94ba1213e", 00:12:47.452 "is_configured": true, 00:12:47.452 "data_offset": 0, 00:12:47.452 "data_size": 65536 00:12:47.452 }, 00:12:47.452 { 00:12:47.452 "name": null, 00:12:47.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.452 "is_configured": false, 00:12:47.452 "data_offset": 0, 00:12:47.452 "data_size": 65536 00:12:47.452 }, 00:12:47.453 { 00:12:47.453 "name": "BaseBdev3", 00:12:47.453 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:47.453 "is_configured": true, 00:12:47.453 "data_offset": 0, 00:12:47.453 "data_size": 65536 00:12:47.453 }, 00:12:47.453 { 00:12:47.453 "name": "BaseBdev4", 00:12:47.453 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:47.453 "is_configured": true, 00:12:47.453 "data_offset": 0, 00:12:47.453 "data_size": 65536 00:12:47.453 } 00:12:47.453 ] 00:12:47.453 }' 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=396 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.453 04:59:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.453 "name": "raid_bdev1", 00:12:47.453 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:47.453 "strip_size_kb": 0, 00:12:47.453 "state": "online", 00:12:47.453 "raid_level": "raid1", 00:12:47.453 "superblock": false, 00:12:47.453 "num_base_bdevs": 4, 00:12:47.453 "num_base_bdevs_discovered": 3, 00:12:47.453 "num_base_bdevs_operational": 3, 00:12:47.453 "process": { 00:12:47.453 "type": "rebuild", 00:12:47.453 "target": "spare", 00:12:47.453 "progress": { 00:12:47.453 "blocks": 14336, 00:12:47.453 "percent": 21 00:12:47.453 } 00:12:47.453 }, 00:12:47.453 "base_bdevs_list": [ 00:12:47.453 { 00:12:47.453 "name": "spare", 00:12:47.453 "uuid": "28fc1d20-ee14-53d5-86b2-a2c94ba1213e", 00:12:47.453 "is_configured": true, 00:12:47.453 "data_offset": 0, 00:12:47.453 "data_size": 65536 00:12:47.453 }, 00:12:47.453 { 00:12:47.453 "name": null, 00:12:47.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.453 "is_configured": false, 00:12:47.453 "data_offset": 0, 00:12:47.453 "data_size": 65536 00:12:47.453 }, 00:12:47.453 { 00:12:47.453 "name": "BaseBdev3", 00:12:47.453 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:47.453 "is_configured": true, 00:12:47.453 "data_offset": 0, 00:12:47.453 "data_size": 65536 00:12:47.453 }, 
00:12:47.453 { 00:12:47.453 "name": "BaseBdev4", 00:12:47.453 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:47.453 "is_configured": true, 00:12:47.453 "data_offset": 0, 00:12:47.453 "data_size": 65536 00:12:47.453 } 00:12:47.453 ] 00:12:47.453 }' 00:12:47.453 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.453 [2024-11-21 04:59:04.163857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:47.453 [2024-11-21 04:59:04.164434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:47.713 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.713 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.713 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.713 04:59:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.282 117.75 IOPS, 353.25 MiB/s [2024-11-21T04:59:05.017Z] [2024-11-21 04:59:04.849026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.541 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.541 "name": "raid_bdev1", 00:12:48.541 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:48.541 "strip_size_kb": 0, 00:12:48.541 "state": "online", 00:12:48.541 "raid_level": "raid1", 00:12:48.541 "superblock": false, 00:12:48.541 "num_base_bdevs": 4, 00:12:48.541 "num_base_bdevs_discovered": 3, 00:12:48.541 "num_base_bdevs_operational": 3, 00:12:48.541 "process": { 00:12:48.541 "type": "rebuild", 00:12:48.541 "target": "spare", 00:12:48.541 "progress": { 00:12:48.541 "blocks": 30720, 00:12:48.541 "percent": 46 00:12:48.541 } 00:12:48.541 }, 00:12:48.541 "base_bdevs_list": [ 00:12:48.541 { 00:12:48.541 "name": "spare", 00:12:48.542 "uuid": "28fc1d20-ee14-53d5-86b2-a2c94ba1213e", 00:12:48.542 "is_configured": true, 00:12:48.542 "data_offset": 0, 00:12:48.542 "data_size": 65536 00:12:48.542 }, 00:12:48.542 { 00:12:48.542 "name": null, 00:12:48.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.542 "is_configured": false, 00:12:48.542 "data_offset": 0, 00:12:48.542 "data_size": 65536 00:12:48.542 }, 00:12:48.542 { 00:12:48.542 "name": "BaseBdev3", 00:12:48.542 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:48.542 "is_configured": true, 00:12:48.542 "data_offset": 0, 00:12:48.542 "data_size": 65536 00:12:48.542 }, 00:12:48.542 { 00:12:48.542 "name": "BaseBdev4", 00:12:48.542 
"uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:48.542 "is_configured": true, 00:12:48.542 "data_offset": 0, 00:12:48.542 "data_size": 65536 00:12:48.542 } 00:12:48.542 ] 00:12:48.542 }' 00:12:48.542 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.807 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.807 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.807 [2024-11-21 04:59:05.342112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:48.807 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.807 04:59:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.384 104.80 IOPS, 314.40 MiB/s [2024-11-21T04:59:06.119Z] [2024-11-21 04:59:05.930555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:49.644 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.644 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.644 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.644 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.644 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.644 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.644 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.644 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:49.644 04:59:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.644 04:59:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.904 04:59:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.904 95.50 IOPS, 286.50 MiB/s [2024-11-21T04:59:06.639Z] 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.904 "name": "raid_bdev1", 00:12:49.904 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:49.904 "strip_size_kb": 0, 00:12:49.904 "state": "online", 00:12:49.904 "raid_level": "raid1", 00:12:49.904 "superblock": false, 00:12:49.904 "num_base_bdevs": 4, 00:12:49.904 "num_base_bdevs_discovered": 3, 00:12:49.904 "num_base_bdevs_operational": 3, 00:12:49.904 "process": { 00:12:49.904 "type": "rebuild", 00:12:49.904 "target": "spare", 00:12:49.904 "progress": { 00:12:49.904 "blocks": 47104, 00:12:49.904 "percent": 71 00:12:49.904 } 00:12:49.904 }, 00:12:49.904 "base_bdevs_list": [ 00:12:49.904 { 00:12:49.904 "name": "spare", 00:12:49.904 "uuid": "28fc1d20-ee14-53d5-86b2-a2c94ba1213e", 00:12:49.904 "is_configured": true, 00:12:49.904 "data_offset": 0, 00:12:49.904 "data_size": 65536 00:12:49.904 }, 00:12:49.904 { 00:12:49.904 "name": null, 00:12:49.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.904 "is_configured": false, 00:12:49.904 "data_offset": 0, 00:12:49.904 "data_size": 65536 00:12:49.904 }, 00:12:49.904 { 00:12:49.904 "name": "BaseBdev3", 00:12:49.904 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:49.904 "is_configured": true, 00:12:49.904 "data_offset": 0, 00:12:49.904 "data_size": 65536 00:12:49.904 }, 00:12:49.904 { 00:12:49.904 "name": "BaseBdev4", 00:12:49.904 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:49.904 "is_configured": true, 00:12:49.904 "data_offset": 0, 00:12:49.904 "data_size": 65536 00:12:49.904 } 00:12:49.904 ] 
00:12:49.904 }' 00:12:49.904 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.904 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.904 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.904 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.904 04:59:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:50.164 [2024-11-21 04:59:06.792508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:50.733 [2024-11-21 04:59:07.229670] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:50.733 [2024-11-21 04:59:07.329508] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:50.733 [2024-11-21 04:59:07.331819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.993 87.00 IOPS, 261.00 MiB/s [2024-11-21T04:59:07.728Z] 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.993 "name": "raid_bdev1", 00:12:50.993 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:50.993 "strip_size_kb": 0, 00:12:50.993 "state": "online", 00:12:50.993 "raid_level": "raid1", 00:12:50.993 "superblock": false, 00:12:50.993 "num_base_bdevs": 4, 00:12:50.993 "num_base_bdevs_discovered": 3, 00:12:50.993 "num_base_bdevs_operational": 3, 00:12:50.993 "base_bdevs_list": [ 00:12:50.993 { 00:12:50.993 "name": "spare", 00:12:50.993 "uuid": "28fc1d20-ee14-53d5-86b2-a2c94ba1213e", 00:12:50.993 "is_configured": true, 00:12:50.993 "data_offset": 0, 00:12:50.993 "data_size": 65536 00:12:50.993 }, 00:12:50.993 { 00:12:50.993 "name": null, 00:12:50.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.993 "is_configured": false, 00:12:50.993 "data_offset": 0, 00:12:50.993 "data_size": 65536 00:12:50.993 }, 00:12:50.993 { 00:12:50.993 "name": "BaseBdev3", 00:12:50.993 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:50.993 "is_configured": true, 00:12:50.993 "data_offset": 0, 00:12:50.993 "data_size": 65536 00:12:50.993 }, 00:12:50.993 { 00:12:50.993 "name": "BaseBdev4", 00:12:50.993 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:50.993 "is_configured": true, 00:12:50.993 "data_offset": 0, 00:12:50.993 "data_size": 65536 00:12:50.993 } 00:12:50.993 ] 00:12:50.993 }' 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none 
== \r\e\b\u\i\l\d ]] 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.993 "name": "raid_bdev1", 00:12:50.993 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:50.993 "strip_size_kb": 0, 00:12:50.993 "state": "online", 00:12:50.993 "raid_level": "raid1", 00:12:50.993 "superblock": false, 00:12:50.993 "num_base_bdevs": 4, 00:12:50.993 "num_base_bdevs_discovered": 3, 00:12:50.993 "num_base_bdevs_operational": 3, 00:12:50.993 "base_bdevs_list": [ 00:12:50.993 { 00:12:50.993 "name": "spare", 00:12:50.993 "uuid": 
"28fc1d20-ee14-53d5-86b2-a2c94ba1213e", 00:12:50.993 "is_configured": true, 00:12:50.993 "data_offset": 0, 00:12:50.993 "data_size": 65536 00:12:50.993 }, 00:12:50.993 { 00:12:50.993 "name": null, 00:12:50.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.993 "is_configured": false, 00:12:50.993 "data_offset": 0, 00:12:50.993 "data_size": 65536 00:12:50.993 }, 00:12:50.993 { 00:12:50.993 "name": "BaseBdev3", 00:12:50.993 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 00:12:50.993 "is_configured": true, 00:12:50.993 "data_offset": 0, 00:12:50.993 "data_size": 65536 00:12:50.993 }, 00:12:50.993 { 00:12:50.993 "name": "BaseBdev4", 00:12:50.993 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:50.993 "is_configured": true, 00:12:50.993 "data_offset": 0, 00:12:50.993 "data_size": 65536 00:12:50.993 } 00:12:50.993 ] 00:12:50.993 }' 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.993 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.253 
04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.253 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.253 "name": "raid_bdev1", 00:12:51.253 "uuid": "78623786-e1ad-4808-802c-a3b389370916", 00:12:51.253 "strip_size_kb": 0, 00:12:51.253 "state": "online", 00:12:51.253 "raid_level": "raid1", 00:12:51.253 "superblock": false, 00:12:51.253 "num_base_bdevs": 4, 00:12:51.253 "num_base_bdevs_discovered": 3, 00:12:51.253 "num_base_bdevs_operational": 3, 00:12:51.253 "base_bdevs_list": [ 00:12:51.253 { 00:12:51.253 "name": "spare", 00:12:51.253 "uuid": "28fc1d20-ee14-53d5-86b2-a2c94ba1213e", 00:12:51.253 "is_configured": true, 00:12:51.253 "data_offset": 0, 00:12:51.253 "data_size": 65536 00:12:51.253 }, 00:12:51.253 { 00:12:51.253 "name": null, 00:12:51.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.253 "is_configured": false, 00:12:51.253 "data_offset": 0, 00:12:51.253 "data_size": 65536 00:12:51.253 }, 00:12:51.253 { 00:12:51.253 "name": "BaseBdev3", 00:12:51.253 "uuid": "f0797fa5-4ded-50a8-bfcc-cd644b396302", 
00:12:51.253 "is_configured": true, 00:12:51.253 "data_offset": 0, 00:12:51.253 "data_size": 65536 00:12:51.253 }, 00:12:51.254 { 00:12:51.254 "name": "BaseBdev4", 00:12:51.254 "uuid": "f9481e99-5c9a-5b73-93c8-7022e480bbea", 00:12:51.254 "is_configured": true, 00:12:51.254 "data_offset": 0, 00:12:51.254 "data_size": 65536 00:12:51.254 } 00:12:51.254 ] 00:12:51.254 }' 00:12:51.254 04:59:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.254 04:59:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.514 [2024-11-21 04:59:08.126550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.514 [2024-11-21 04:59:08.126586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.514 00:12:51.514 Latency(us) 00:12:51.514 [2024-11-21T04:59:08.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.514 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:51.514 raid_bdev1 : 7.82 81.15 243.46 0.00 0.00 16827.10 295.13 110352.32 00:12:51.514 [2024-11-21T04:59:08.249Z] =================================================================================================================== 00:12:51.514 [2024-11-21T04:59:08.249Z] Total : 81.15 243.46 0.00 0.00 16827.10 295.13 110352.32 00:12:51.514 [2024-11-21 04:59:08.193790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.514 [2024-11-21 04:59:08.193836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.514 [2024-11-21 04:59:08.193932] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.514 [2024-11-21 04:59:08.193946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:51.514 { 00:12:51.514 "results": [ 00:12:51.514 { 00:12:51.514 "job": "raid_bdev1", 00:12:51.514 "core_mask": "0x1", 00:12:51.514 "workload": "randrw", 00:12:51.514 "percentage": 50, 00:12:51.514 "status": "finished", 00:12:51.514 "queue_depth": 2, 00:12:51.514 "io_size": 3145728, 00:12:51.514 "runtime": 7.824554, 00:12:51.514 "iops": 81.15478530789103, 00:12:51.514 "mibps": 243.4643559236731, 00:12:51.514 "io_failed": 0, 00:12:51.514 "io_timeout": 0, 00:12:51.514 "avg_latency_us": 16827.09838737407, 00:12:51.514 "min_latency_us": 295.12663755458516, 00:12:51.514 "max_latency_us": 110352.32139737991 00:12:51.514 } 00:12:51.514 ], 00:12:51.514 "core_count": 1 00:12:51.514 } 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock 
spare /dev/nbd0 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.514 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:51.774 /dev/nbd0 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.774 04:59:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.774 1+0 records in 00:12:51.774 1+0 records out 00:12:51.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478516 s, 8.6 MB/s 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.774 04:59:08 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.774 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.775 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:52.034 /dev/nbd1 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.034 1+0 records in 00:12:52.034 1+0 records out 00:12:52.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363257 s, 11.3 MB/s 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.034 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:52.294 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:52.295 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.295 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:52.295 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.295 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:52.295 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.295 04:59:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:52.295 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.554 04:59:09 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:52.554 /dev/nbd1 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:52.554 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.555 1+0 records in 00:12:52.555 1+0 records out 00:12:52.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303147 s, 13.5 MB/s 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.555 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:52.815 
04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.815 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89489 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' 
-z 89489 ']' 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 89489 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89489 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.075 killing process with pid 89489 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89489' 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 89489 00:12:53.075 Received shutdown signal, test time was about 9.443271 seconds 00:12:53.075 00:12:53.075 Latency(us) 00:12:53.075 [2024-11-21T04:59:09.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.075 [2024-11-21T04:59:09.810Z] =================================================================================================================== 00:12:53.075 [2024-11-21T04:59:09.810Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:53.075 [2024-11-21 04:59:09.805975] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.075 04:59:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 89489 00:12:53.336 [2024-11-21 04:59:09.851931] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.336 04:59:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:53.336 00:12:53.336 real 0m11.404s 00:12:53.336 user 0m14.726s 00:12:53.336 sys 0m1.740s 00:12:53.336 04:59:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.336 04:59:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.336 ************************************ 00:12:53.336 END TEST raid_rebuild_test_io 00:12:53.336 ************************************ 00:12:53.596 04:59:10 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:53.596 04:59:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:53.596 04:59:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.596 04:59:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.596 ************************************ 00:12:53.596 START TEST raid_rebuild_test_sb_io 00:12:53.596 ************************************ 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.596 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89876 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89876 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 89876 ']' 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.597 04:59:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.597 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:53.597 Zero copy mechanism will not be used. 00:12:53.597 [2024-11-21 04:59:10.227054] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:12:53.597 [2024-11-21 04:59:10.227207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89876 ] 00:12:53.857 [2024-11-21 04:59:10.395546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.857 [2024-11-21 04:59:10.421123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.857 [2024-11-21 04:59:10.463878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.857 [2024-11-21 04:59:10.463921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.427 BaseBdev1_malloc 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.427 [2024-11-21 04:59:11.078732] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:54.427 [2024-11-21 04:59:11.078817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.427 [2024-11-21 04:59:11.078846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:54.427 [2024-11-21 04:59:11.078865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.427 [2024-11-21 04:59:11.081063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.427 [2024-11-21 04:59:11.081124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:54.427 BaseBdev1 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.427 BaseBdev2_malloc 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.427 [2024-11-21 04:59:11.107223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:54.427 [2024-11-21 04:59:11.107284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:54.427 [2024-11-21 04:59:11.107319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:54.427 [2024-11-21 04:59:11.107327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.427 [2024-11-21 04:59:11.109407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.427 [2024-11-21 04:59:11.109442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:54.427 BaseBdev2 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.427 BaseBdev3_malloc 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.427 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.427 [2024-11-21 04:59:11.136412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:54.427 [2024-11-21 04:59:11.136473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.427 [2024-11-21 04:59:11.136497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:54.427 
[2024-11-21 04:59:11.136507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.427 [2024-11-21 04:59:11.138581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.427 [2024-11-21 04:59:11.138617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:54.427 BaseBdev3 00:12:54.428 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.428 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:54.428 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:54.428 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.428 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.689 BaseBdev4_malloc 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.689 [2024-11-21 04:59:11.173983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:54.689 [2024-11-21 04:59:11.174041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.689 [2024-11-21 04:59:11.174066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:54.689 [2024-11-21 04:59:11.174076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.689 [2024-11-21 04:59:11.176393] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.689 [2024-11-21 04:59:11.176430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:54.689 BaseBdev4 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.689 spare_malloc 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.689 spare_delay 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.689 [2024-11-21 04:59:11.214557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:54.689 [2024-11-21 04:59:11.214625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.689 [2024-11-21 04:59:11.214646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:12:54.689 [2024-11-21 04:59:11.214654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.689 [2024-11-21 04:59:11.216747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.689 [2024-11-21 04:59:11.216782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:54.689 spare 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.689 [2024-11-21 04:59:11.226605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.689 [2024-11-21 04:59:11.228403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.689 [2024-11-21 04:59:11.228475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.689 [2024-11-21 04:59:11.228519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:54.689 [2024-11-21 04:59:11.228688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:54.689 [2024-11-21 04:59:11.228719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.689 [2024-11-21 04:59:11.228971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:54.689 [2024-11-21 04:59:11.229149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:54.689 [2024-11-21 04:59:11.229177] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:54.689 [2024-11-21 04:59:11.229318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.689 "name": "raid_bdev1", 00:12:54.689 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:12:54.689 "strip_size_kb": 0, 00:12:54.689 "state": "online", 00:12:54.689 "raid_level": "raid1", 00:12:54.689 "superblock": true, 00:12:54.689 "num_base_bdevs": 4, 00:12:54.689 "num_base_bdevs_discovered": 4, 00:12:54.689 "num_base_bdevs_operational": 4, 00:12:54.689 "base_bdevs_list": [ 00:12:54.689 { 00:12:54.689 "name": "BaseBdev1", 00:12:54.689 "uuid": "b3f0284c-a586-546e-9396-a5ebd0025bf7", 00:12:54.689 "is_configured": true, 00:12:54.689 "data_offset": 2048, 00:12:54.689 "data_size": 63488 00:12:54.689 }, 00:12:54.689 { 00:12:54.689 "name": "BaseBdev2", 00:12:54.689 "uuid": "eb3b1351-d1e6-59eb-9bdd-377139c42328", 00:12:54.689 "is_configured": true, 00:12:54.689 "data_offset": 2048, 00:12:54.689 "data_size": 63488 00:12:54.689 }, 00:12:54.689 { 00:12:54.689 "name": "BaseBdev3", 00:12:54.689 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:12:54.689 "is_configured": true, 00:12:54.689 "data_offset": 2048, 00:12:54.689 "data_size": 63488 00:12:54.689 }, 00:12:54.689 { 00:12:54.689 "name": "BaseBdev4", 00:12:54.689 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:12:54.689 "is_configured": true, 00:12:54.689 "data_offset": 2048, 00:12:54.689 "data_size": 63488 00:12:54.689 } 00:12:54.689 ] 00:12:54.689 }' 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.689 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:55.262 [2024-11-21 04:59:11.714143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.262 [2024-11-21 04:59:11.809586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.262 04:59:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.262 "name": "raid_bdev1", 00:12:55.262 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:12:55.262 "strip_size_kb": 0, 00:12:55.262 "state": "online", 00:12:55.262 "raid_level": "raid1", 00:12:55.262 
"superblock": true, 00:12:55.262 "num_base_bdevs": 4, 00:12:55.262 "num_base_bdevs_discovered": 3, 00:12:55.262 "num_base_bdevs_operational": 3, 00:12:55.262 "base_bdevs_list": [ 00:12:55.262 { 00:12:55.262 "name": null, 00:12:55.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.262 "is_configured": false, 00:12:55.262 "data_offset": 0, 00:12:55.262 "data_size": 63488 00:12:55.262 }, 00:12:55.262 { 00:12:55.262 "name": "BaseBdev2", 00:12:55.262 "uuid": "eb3b1351-d1e6-59eb-9bdd-377139c42328", 00:12:55.262 "is_configured": true, 00:12:55.262 "data_offset": 2048, 00:12:55.262 "data_size": 63488 00:12:55.262 }, 00:12:55.262 { 00:12:55.262 "name": "BaseBdev3", 00:12:55.262 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:12:55.262 "is_configured": true, 00:12:55.262 "data_offset": 2048, 00:12:55.262 "data_size": 63488 00:12:55.262 }, 00:12:55.262 { 00:12:55.262 "name": "BaseBdev4", 00:12:55.262 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:12:55.262 "is_configured": true, 00:12:55.262 "data_offset": 2048, 00:12:55.262 "data_size": 63488 00:12:55.262 } 00:12:55.262 ] 00:12:55.262 }' 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.262 04:59:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.262 [2024-11-21 04:59:11.907482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:55.262 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:55.262 Zero copy mechanism will not be used. 00:12:55.262 Running I/O for 60 seconds... 
00:12:55.522 04:59:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:55.522 04:59:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.522 04:59:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.522 [2024-11-21 04:59:12.219579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.522 04:59:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.522 04:59:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:55.522 [2024-11-21 04:59:12.255052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:55.783 [2024-11-21 04:59:12.257147] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:55.783 [2024-11-21 04:59:12.372962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:55.783 [2024-11-21 04:59:12.491458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:55.783 [2024-11-21 04:59:12.491848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:56.353 [2024-11-21 04:59:12.823903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:56.353 171.00 IOPS, 513.00 MiB/s [2024-11-21T04:59:13.088Z] [2024-11-21 04:59:12.941894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:56.353 [2024-11-21 04:59:12.942305] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:56.613 04:59:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.613 "name": "raid_bdev1", 00:12:56.613 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:12:56.613 "strip_size_kb": 0, 00:12:56.613 "state": "online", 00:12:56.613 "raid_level": "raid1", 00:12:56.613 "superblock": true, 00:12:56.613 "num_base_bdevs": 4, 00:12:56.613 "num_base_bdevs_discovered": 4, 00:12:56.613 "num_base_bdevs_operational": 4, 00:12:56.613 "process": { 00:12:56.613 "type": "rebuild", 00:12:56.613 "target": "spare", 00:12:56.613 "progress": { 00:12:56.613 "blocks": 12288, 00:12:56.613 "percent": 19 00:12:56.613 } 00:12:56.613 }, 00:12:56.613 "base_bdevs_list": [ 00:12:56.613 { 00:12:56.613 "name": "spare", 00:12:56.613 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:12:56.613 "is_configured": true, 00:12:56.613 "data_offset": 2048, 
00:12:56.613 "data_size": 63488 00:12:56.613 }, 00:12:56.613 { 00:12:56.613 "name": "BaseBdev2", 00:12:56.613 "uuid": "eb3b1351-d1e6-59eb-9bdd-377139c42328", 00:12:56.613 "is_configured": true, 00:12:56.613 "data_offset": 2048, 00:12:56.613 "data_size": 63488 00:12:56.613 }, 00:12:56.613 { 00:12:56.613 "name": "BaseBdev3", 00:12:56.613 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:12:56.613 "is_configured": true, 00:12:56.613 "data_offset": 2048, 00:12:56.613 "data_size": 63488 00:12:56.613 }, 00:12:56.613 { 00:12:56.613 "name": "BaseBdev4", 00:12:56.613 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:12:56.613 "is_configured": true, 00:12:56.613 "data_offset": 2048, 00:12:56.613 "data_size": 63488 00:12:56.613 } 00:12:56.613 ] 00:12:56.613 }' 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.613 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.874 [2024-11-21 04:59:13.394591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.874 [2024-11-21 04:59:13.451884] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.874 [2024-11-21 04:59:13.461925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.874 [2024-11-21 04:59:13.462000] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.874 [2024-11-21 04:59:13.462032] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.874 [2024-11-21 04:59:13.485748] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.874 04:59:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.874 "name": "raid_bdev1", 00:12:56.874 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:12:56.874 "strip_size_kb": 0, 00:12:56.874 "state": "online", 00:12:56.874 "raid_level": "raid1", 00:12:56.874 "superblock": true, 00:12:56.874 "num_base_bdevs": 4, 00:12:56.874 "num_base_bdevs_discovered": 3, 00:12:56.874 "num_base_bdevs_operational": 3, 00:12:56.874 "base_bdevs_list": [ 00:12:56.874 { 00:12:56.874 "name": null, 00:12:56.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.874 "is_configured": false, 00:12:56.874 "data_offset": 0, 00:12:56.874 "data_size": 63488 00:12:56.874 }, 00:12:56.874 { 00:12:56.874 "name": "BaseBdev2", 00:12:56.874 "uuid": "eb3b1351-d1e6-59eb-9bdd-377139c42328", 00:12:56.874 "is_configured": true, 00:12:56.874 "data_offset": 2048, 00:12:56.874 "data_size": 63488 00:12:56.874 }, 00:12:56.874 { 00:12:56.874 "name": "BaseBdev3", 00:12:56.874 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:12:56.874 "is_configured": true, 00:12:56.874 "data_offset": 2048, 00:12:56.874 "data_size": 63488 00:12:56.874 }, 00:12:56.874 { 00:12:56.874 "name": "BaseBdev4", 00:12:56.874 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:12:56.874 "is_configured": true, 00:12:56.874 "data_offset": 2048, 00:12:56.874 "data_size": 63488 00:12:56.874 } 00:12:56.874 ] 00:12:56.874 }' 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.874 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.444 153.50 IOPS, 460.50 MiB/s [2024-11-21T04:59:14.179Z] 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:12:57.444 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.444 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.444 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.444 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.444 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.444 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.444 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.444 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.444 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.444 04:59:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.444 "name": "raid_bdev1", 00:12:57.444 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:12:57.444 "strip_size_kb": 0, 00:12:57.444 "state": "online", 00:12:57.444 "raid_level": "raid1", 00:12:57.444 "superblock": true, 00:12:57.444 "num_base_bdevs": 4, 00:12:57.445 "num_base_bdevs_discovered": 3, 00:12:57.445 "num_base_bdevs_operational": 3, 00:12:57.445 "base_bdevs_list": [ 00:12:57.445 { 00:12:57.445 "name": null, 00:12:57.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.445 "is_configured": false, 00:12:57.445 "data_offset": 0, 00:12:57.445 "data_size": 63488 00:12:57.445 }, 00:12:57.445 { 00:12:57.445 "name": "BaseBdev2", 00:12:57.445 "uuid": "eb3b1351-d1e6-59eb-9bdd-377139c42328", 00:12:57.445 "is_configured": true, 00:12:57.445 "data_offset": 2048, 00:12:57.445 "data_size": 63488 00:12:57.445 }, 00:12:57.445 { 00:12:57.445 "name": "BaseBdev3", 
00:12:57.445 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:12:57.445 "is_configured": true, 00:12:57.445 "data_offset": 2048, 00:12:57.445 "data_size": 63488 00:12:57.445 }, 00:12:57.445 { 00:12:57.445 "name": "BaseBdev4", 00:12:57.445 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:12:57.445 "is_configured": true, 00:12:57.445 "data_offset": 2048, 00:12:57.445 "data_size": 63488 00:12:57.445 } 00:12:57.445 ] 00:12:57.445 }' 00:12:57.445 04:59:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.445 04:59:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.445 04:59:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.445 04:59:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.445 04:59:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:57.445 04:59:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.445 04:59:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.445 [2024-11-21 04:59:14.090806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.445 04:59:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.445 04:59:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:57.445 [2024-11-21 04:59:14.134548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:57.445 [2024-11-21 04:59:14.136645] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.704 [2024-11-21 04:59:14.243535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.704 
[2024-11-21 04:59:14.244825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.964 [2024-11-21 04:59:14.469409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:57.964 [2024-11-21 04:59:14.469698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:58.224 [2024-11-21 04:59:14.723796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:58.224 135.33 IOPS, 406.00 MiB/s [2024-11-21T04:59:14.959Z] [2024-11-21 04:59:14.946855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.484 
[2024-11-21 04:59:15.169938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:58.484 [2024-11-21 04:59:15.170575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:58.484 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.484 "name": "raid_bdev1", 00:12:58.484 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:12:58.484 "strip_size_kb": 0, 00:12:58.484 "state": "online", 00:12:58.484 "raid_level": "raid1", 00:12:58.484 "superblock": true, 00:12:58.484 "num_base_bdevs": 4, 00:12:58.484 "num_base_bdevs_discovered": 4, 00:12:58.484 "num_base_bdevs_operational": 4, 00:12:58.484 "process": { 00:12:58.484 "type": "rebuild", 00:12:58.484 "target": "spare", 00:12:58.484 "progress": { 00:12:58.484 "blocks": 12288, 00:12:58.484 "percent": 19 00:12:58.484 } 00:12:58.484 }, 00:12:58.484 "base_bdevs_list": [ 00:12:58.484 { 00:12:58.484 "name": "spare", 00:12:58.484 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:12:58.484 "is_configured": true, 00:12:58.484 "data_offset": 2048, 00:12:58.484 "data_size": 63488 00:12:58.484 }, 00:12:58.484 { 00:12:58.484 "name": "BaseBdev2", 00:12:58.485 "uuid": "eb3b1351-d1e6-59eb-9bdd-377139c42328", 00:12:58.485 "is_configured": true, 00:12:58.485 "data_offset": 2048, 00:12:58.485 "data_size": 63488 00:12:58.485 }, 00:12:58.485 { 00:12:58.485 "name": "BaseBdev3", 00:12:58.485 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:12:58.485 "is_configured": true, 00:12:58.485 "data_offset": 2048, 00:12:58.485 "data_size": 63488 00:12:58.485 }, 00:12:58.485 { 00:12:58.485 "name": "BaseBdev4", 00:12:58.485 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:12:58.485 "is_configured": true, 00:12:58.485 "data_offset": 2048, 00:12:58.485 "data_size": 63488 00:12:58.485 } 00:12:58.485 ] 00:12:58.485 }' 00:12:58.485 04:59:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.485 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.485 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.745 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.745 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:58.745 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:58.745 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:58.745 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:58.745 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:58.745 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:58.745 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:58.745 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.745 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.745 [2024-11-21 04:59:15.236449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:59.005 [2024-11-21 04:59:15.516270] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:12:59.005 [2024-11-21 04:59:15.516348] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:12:59.005 [2024-11-21 04:59:15.517264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:59.005 04:59:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.005 "name": "raid_bdev1", 00:12:59.005 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:12:59.005 "strip_size_kb": 0, 00:12:59.005 "state": "online", 00:12:59.005 "raid_level": "raid1", 00:12:59.005 "superblock": true, 00:12:59.005 "num_base_bdevs": 4, 00:12:59.005 "num_base_bdevs_discovered": 3, 00:12:59.005 "num_base_bdevs_operational": 3, 00:12:59.005 "process": { 00:12:59.005 "type": "rebuild", 00:12:59.005 "target": "spare", 
00:12:59.005 "progress": { 00:12:59.005 "blocks": 16384, 00:12:59.005 "percent": 25 00:12:59.005 } 00:12:59.005 }, 00:12:59.005 "base_bdevs_list": [ 00:12:59.005 { 00:12:59.005 "name": "spare", 00:12:59.005 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:12:59.005 "is_configured": true, 00:12:59.005 "data_offset": 2048, 00:12:59.005 "data_size": 63488 00:12:59.005 }, 00:12:59.005 { 00:12:59.005 "name": null, 00:12:59.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.005 "is_configured": false, 00:12:59.005 "data_offset": 0, 00:12:59.005 "data_size": 63488 00:12:59.005 }, 00:12:59.005 { 00:12:59.005 "name": "BaseBdev3", 00:12:59.005 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:12:59.005 "is_configured": true, 00:12:59.005 "data_offset": 2048, 00:12:59.005 "data_size": 63488 00:12:59.005 }, 00:12:59.005 { 00:12:59.005 "name": "BaseBdev4", 00:12:59.005 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:12:59.005 "is_configured": true, 00:12:59.005 "data_offset": 2048, 00:12:59.005 "data_size": 63488 00:12:59.005 } 00:12:59.005 ] 00:12:59.005 }' 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=407 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.005 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.005 "name": "raid_bdev1", 00:12:59.005 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:12:59.005 "strip_size_kb": 0, 00:12:59.005 "state": "online", 00:12:59.005 "raid_level": "raid1", 00:12:59.005 "superblock": true, 00:12:59.005 "num_base_bdevs": 4, 00:12:59.005 "num_base_bdevs_discovered": 3, 00:12:59.005 "num_base_bdevs_operational": 3, 00:12:59.005 "process": { 00:12:59.005 "type": "rebuild", 00:12:59.005 "target": "spare", 00:12:59.005 "progress": { 00:12:59.005 "blocks": 18432, 00:12:59.005 "percent": 29 00:12:59.005 } 00:12:59.005 }, 00:12:59.005 "base_bdevs_list": [ 00:12:59.005 { 00:12:59.005 "name": "spare", 00:12:59.005 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:12:59.005 "is_configured": true, 00:12:59.005 "data_offset": 2048, 00:12:59.005 "data_size": 63488 00:12:59.005 }, 00:12:59.005 { 00:12:59.005 "name": null, 00:12:59.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.005 "is_configured": false, 00:12:59.005 
"data_offset": 0, 00:12:59.005 "data_size": 63488 00:12:59.005 }, 00:12:59.005 { 00:12:59.006 "name": "BaseBdev3", 00:12:59.006 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:12:59.006 "is_configured": true, 00:12:59.006 "data_offset": 2048, 00:12:59.006 "data_size": 63488 00:12:59.006 }, 00:12:59.006 { 00:12:59.006 "name": "BaseBdev4", 00:12:59.006 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:12:59.006 "is_configured": true, 00:12:59.006 "data_offset": 2048, 00:12:59.006 "data_size": 63488 00:12:59.006 } 00:12:59.006 ] 00:12:59.006 }' 00:12:59.006 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.265 [2024-11-21 04:59:15.748771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:59.265 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.265 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.265 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.265 04:59:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.266 [2024-11-21 04:59:15.857036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:59.266 [2024-11-21 04:59:15.857441] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:59.526 116.00 IOPS, 348.00 MiB/s [2024-11-21T04:59:16.261Z] [2024-11-21 04:59:16.202442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:59.526 [2024-11-21 04:59:16.203316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:59.786 
[2024-11-21 04:59:16.433038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.356 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.356 "name": "raid_bdev1", 00:13:00.356 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:00.356 "strip_size_kb": 0, 00:13:00.356 "state": "online", 00:13:00.356 "raid_level": "raid1", 00:13:00.356 "superblock": true, 00:13:00.356 "num_base_bdevs": 4, 00:13:00.356 "num_base_bdevs_discovered": 3, 00:13:00.357 "num_base_bdevs_operational": 3, 00:13:00.357 "process": { 00:13:00.357 "type": "rebuild", 00:13:00.357 "target": "spare", 00:13:00.357 "progress": { 00:13:00.357 
"blocks": 32768, 00:13:00.357 "percent": 51 00:13:00.357 } 00:13:00.357 }, 00:13:00.357 "base_bdevs_list": [ 00:13:00.357 { 00:13:00.357 "name": "spare", 00:13:00.357 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:13:00.357 "is_configured": true, 00:13:00.357 "data_offset": 2048, 00:13:00.357 "data_size": 63488 00:13:00.357 }, 00:13:00.357 { 00:13:00.357 "name": null, 00:13:00.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.357 "is_configured": false, 00:13:00.357 "data_offset": 0, 00:13:00.357 "data_size": 63488 00:13:00.357 }, 00:13:00.357 { 00:13:00.357 "name": "BaseBdev3", 00:13:00.357 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:00.357 "is_configured": true, 00:13:00.357 "data_offset": 2048, 00:13:00.357 "data_size": 63488 00:13:00.357 }, 00:13:00.357 { 00:13:00.357 "name": "BaseBdev4", 00:13:00.357 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:00.357 "is_configured": true, 00:13:00.357 "data_offset": 2048, 00:13:00.357 "data_size": 63488 00:13:00.357 } 00:13:00.357 ] 00:13:00.357 }' 00:13:00.357 [2024-11-21 04:59:16.865926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:00.357 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.357 101.40 IOPS, 304.20 MiB/s [2024-11-21T04:59:17.092Z] 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.357 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.357 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.357 04:59:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:00.357 [2024-11-21 04:59:17.088764] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:00.357 
[2024-11-21 04:59:17.089414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:01.297 90.33 IOPS, 271.00 MiB/s [2024-11-21T04:59:18.032Z] [2024-11-21 04:59:17.913400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.297 04:59:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.297 04:59:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.297 "name": "raid_bdev1", 00:13:01.297 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:01.297 "strip_size_kb": 0, 00:13:01.297 "state": "online", 00:13:01.297 "raid_level": "raid1", 00:13:01.297 "superblock": true, 00:13:01.297 "num_base_bdevs": 4, 
00:13:01.297 "num_base_bdevs_discovered": 3, 00:13:01.297 "num_base_bdevs_operational": 3, 00:13:01.297 "process": { 00:13:01.297 "type": "rebuild", 00:13:01.297 "target": "spare", 00:13:01.297 "progress": { 00:13:01.297 "blocks": 53248, 00:13:01.297 "percent": 83 00:13:01.297 } 00:13:01.297 }, 00:13:01.297 "base_bdevs_list": [ 00:13:01.297 { 00:13:01.297 "name": "spare", 00:13:01.297 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:13:01.297 "is_configured": true, 00:13:01.297 "data_offset": 2048, 00:13:01.297 "data_size": 63488 00:13:01.297 }, 00:13:01.297 { 00:13:01.297 "name": null, 00:13:01.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.297 "is_configured": false, 00:13:01.297 "data_offset": 0, 00:13:01.297 "data_size": 63488 00:13:01.297 }, 00:13:01.297 { 00:13:01.297 "name": "BaseBdev3", 00:13:01.297 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:01.297 "is_configured": true, 00:13:01.297 "data_offset": 2048, 00:13:01.297 "data_size": 63488 00:13:01.297 }, 00:13:01.297 { 00:13:01.297 "name": "BaseBdev4", 00:13:01.297 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:01.297 "is_configured": true, 00:13:01.297 "data_offset": 2048, 00:13:01.297 "data_size": 63488 00:13:01.297 } 00:13:01.297 ] 00:13:01.297 }' 00:13:01.297 04:59:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.557 04:59:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.557 04:59:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.557 04:59:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.557 04:59:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:02.126 [2024-11-21 04:59:18.551537] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:02.126 [2024-11-21 04:59:18.571059] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:02.126 [2024-11-21 04:59:18.574041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.646 83.14 IOPS, 249.43 MiB/s [2024-11-21T04:59:19.381Z] 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.646 "name": "raid_bdev1", 00:13:02.646 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:02.646 "strip_size_kb": 0, 00:13:02.646 "state": "online", 00:13:02.646 "raid_level": "raid1", 00:13:02.646 "superblock": true, 00:13:02.646 "num_base_bdevs": 4, 00:13:02.646 "num_base_bdevs_discovered": 3, 00:13:02.646 "num_base_bdevs_operational": 3, 00:13:02.646 
"base_bdevs_list": [ 00:13:02.646 { 00:13:02.646 "name": "spare", 00:13:02.646 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:13:02.646 "is_configured": true, 00:13:02.646 "data_offset": 2048, 00:13:02.646 "data_size": 63488 00:13:02.646 }, 00:13:02.646 { 00:13:02.646 "name": null, 00:13:02.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.646 "is_configured": false, 00:13:02.646 "data_offset": 0, 00:13:02.646 "data_size": 63488 00:13:02.646 }, 00:13:02.646 { 00:13:02.646 "name": "BaseBdev3", 00:13:02.646 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:02.646 "is_configured": true, 00:13:02.646 "data_offset": 2048, 00:13:02.646 "data_size": 63488 00:13:02.646 }, 00:13:02.646 { 00:13:02.646 "name": "BaseBdev4", 00:13:02.646 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:02.646 "is_configured": true, 00:13:02.646 "data_offset": 2048, 00:13:02.646 "data_size": 63488 00:13:02.646 } 00:13:02.646 ] 00:13:02.646 }' 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:02.646 04:59:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.646 "name": "raid_bdev1", 00:13:02.646 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:02.646 "strip_size_kb": 0, 00:13:02.646 "state": "online", 00:13:02.646 "raid_level": "raid1", 00:13:02.646 "superblock": true, 00:13:02.646 "num_base_bdevs": 4, 00:13:02.646 "num_base_bdevs_discovered": 3, 00:13:02.646 "num_base_bdevs_operational": 3, 00:13:02.646 "base_bdevs_list": [ 00:13:02.646 { 00:13:02.646 "name": "spare", 00:13:02.646 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:13:02.646 "is_configured": true, 00:13:02.646 "data_offset": 2048, 00:13:02.646 "data_size": 63488 00:13:02.646 }, 00:13:02.646 { 00:13:02.646 "name": null, 00:13:02.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.646 "is_configured": false, 00:13:02.646 "data_offset": 0, 00:13:02.646 "data_size": 63488 00:13:02.646 }, 00:13:02.646 { 00:13:02.646 "name": "BaseBdev3", 00:13:02.646 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:02.646 "is_configured": true, 00:13:02.646 "data_offset": 2048, 00:13:02.646 "data_size": 63488 00:13:02.646 }, 00:13:02.646 { 00:13:02.646 "name": "BaseBdev4", 00:13:02.646 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:02.646 "is_configured": true, 00:13:02.646 "data_offset": 2048, 
00:13:02.646 "data_size": 63488 00:13:02.646 } 00:13:02.646 ] 00:13:02.646 }' 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.646 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.905 "name": "raid_bdev1", 00:13:02.905 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:02.905 "strip_size_kb": 0, 00:13:02.905 "state": "online", 00:13:02.905 "raid_level": "raid1", 00:13:02.905 "superblock": true, 00:13:02.905 "num_base_bdevs": 4, 00:13:02.905 "num_base_bdevs_discovered": 3, 00:13:02.905 "num_base_bdevs_operational": 3, 00:13:02.905 "base_bdevs_list": [ 00:13:02.905 { 00:13:02.905 "name": "spare", 00:13:02.905 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:13:02.905 "is_configured": true, 00:13:02.905 "data_offset": 2048, 00:13:02.905 "data_size": 63488 00:13:02.905 }, 00:13:02.905 { 00:13:02.905 "name": null, 00:13:02.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.905 "is_configured": false, 00:13:02.905 "data_offset": 0, 00:13:02.905 "data_size": 63488 00:13:02.905 }, 00:13:02.905 { 00:13:02.905 "name": "BaseBdev3", 00:13:02.905 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:02.905 "is_configured": true, 00:13:02.905 "data_offset": 2048, 00:13:02.905 "data_size": 63488 00:13:02.905 }, 00:13:02.905 { 00:13:02.905 "name": "BaseBdev4", 00:13:02.905 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:02.905 "is_configured": true, 00:13:02.905 "data_offset": 2048, 00:13:02.905 "data_size": 63488 00:13:02.905 } 00:13:02.905 ] 00:13:02.905 }' 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.905 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:13:03.165 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.165 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 [2024-11-21 04:59:19.849773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.165 [2024-11-21 04:59:19.849818] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.165 00:13:03.165 Latency(us) 00:13:03.165 [2024-11-21T04:59:19.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.165 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:03.165 raid_bdev1 : 7.98 77.83 233.48 0.00 0.00 17982.62 300.49 116304.94 00:13:03.165 [2024-11-21T04:59:19.900Z] =================================================================================================================== 00:13:03.165 [2024-11-21T04:59:19.900Z] Total : 77.83 233.48 0.00 0.00 17982.62 300.49 116304.94 00:13:03.165 [2024-11-21 04:59:19.877662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.165 [2024-11-21 04:59:19.877725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.165 [2024-11-21 04:59:19.877893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.165 [2024-11-21 04:59:19.877921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:03.165 { 00:13:03.165 "results": [ 00:13:03.165 { 00:13:03.165 "job": "raid_bdev1", 00:13:03.165 "core_mask": "0x1", 00:13:03.165 "workload": "randrw", 00:13:03.165 "percentage": 50, 00:13:03.165 "status": "finished", 00:13:03.165 "queue_depth": 2, 00:13:03.165 "io_size": 3145728, 00:13:03.165 "runtime": 7.979314, 00:13:03.165 "iops": 77.82623919800625, 00:13:03.165 "mibps": 233.47871759401875, 
00:13:03.165 "io_failed": 0, 00:13:03.165 "io_timeout": 0, 00:13:03.165 "avg_latency_us": 17982.62494216259, 00:13:03.165 "min_latency_us": 300.49257641921395, 00:13:03.165 "max_latency_us": 116304.93624454149 00:13:03.165 } 00:13:03.165 ], 00:13:03.165 "core_count": 1 00:13:03.165 } 00:13:03.165 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.165 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.165 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.165 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:03.165 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # 
local i 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.426 04:59:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:03.426 /dev/nbd0 00:13:03.426 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:03.426 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:03.426 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:03.426 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:03.426 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:03.426 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:03.426 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:03.685 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:03.685 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:03.685 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:03.685 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.685 1+0 records in 00:13:03.685 1+0 records out 00:13:03.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454941 s, 9.0 MB/s 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.686 04:59:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:03.686 04:59:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:03.686 /dev/nbd1 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:03.686 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.947 1+0 records in 00:13:03.947 1+0 records out 00:13:03.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405435 s, 10.1 MB/s 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # size=4096 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.947 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.207 
04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:04.207 /dev/nbd1 00:13:04.207 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:04.467 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:13:04.467 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:04.467 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:04.467 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.467 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.467 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.468 1+0 records in 00:13:04.468 1+0 records out 00:13:04.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284142 s, 14.4 MB/s 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.468 04:59:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:04.468 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:04.468 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.468 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:04.468 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.468 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:04.468 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.468 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:04.728 04:59:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.728 04:59:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.728 [2024-11-21 04:59:21.450465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:04.728 [2024-11-21 04:59:21.450568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.728 [2024-11-21 04:59:21.450595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:04.728 [2024-11-21 04:59:21.450606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.728 [2024-11-21 04:59:21.452749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.728 [2024-11-21 04:59:21.452790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:04.728 [2024-11-21 04:59:21.452878] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:04.728 [2024-11-21 04:59:21.452919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.728 [2024-11-21 04:59:21.453029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.728 [2024-11-21 04:59:21.453166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:04.728 spare 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.728 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:04.729 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:04.729 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.989 [2024-11-21 04:59:21.553072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:04.989 [2024-11-21 04:59:21.553150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:04.989 [2024-11-21 04:59:21.553521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:13:04.989 [2024-11-21 04:59:21.553708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:04.989 [2024-11-21 04:59:21.553717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:04.989 [2024-11-21 04:59:21.553882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.989 "name": "raid_bdev1", 00:13:04.989 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:04.989 "strip_size_kb": 0, 00:13:04.989 "state": "online", 00:13:04.989 "raid_level": "raid1", 00:13:04.989 "superblock": true, 00:13:04.989 "num_base_bdevs": 4, 00:13:04.989 "num_base_bdevs_discovered": 3, 00:13:04.989 "num_base_bdevs_operational": 3, 00:13:04.989 "base_bdevs_list": [ 00:13:04.989 { 00:13:04.989 "name": "spare", 00:13:04.989 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:13:04.989 "is_configured": true, 00:13:04.989 "data_offset": 2048, 00:13:04.989 "data_size": 63488 00:13:04.989 }, 00:13:04.989 { 00:13:04.989 "name": null, 00:13:04.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.989 "is_configured": false, 00:13:04.989 "data_offset": 2048, 00:13:04.989 "data_size": 63488 00:13:04.989 }, 00:13:04.989 { 00:13:04.989 "name": "BaseBdev3", 00:13:04.989 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:04.989 "is_configured": true, 00:13:04.989 "data_offset": 2048, 00:13:04.989 "data_size": 63488 00:13:04.989 }, 00:13:04.989 { 00:13:04.989 "name": "BaseBdev4", 00:13:04.989 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:04.989 
"is_configured": true, 00:13:04.989 "data_offset": 2048, 00:13:04.989 "data_size": 63488 00:13:04.989 } 00:13:04.989 ] 00:13:04.989 }' 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.989 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.248 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.248 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.248 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.248 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.248 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.248 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.248 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.248 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.248 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.248 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.508 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.508 "name": "raid_bdev1", 00:13:05.508 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:05.508 "strip_size_kb": 0, 00:13:05.508 "state": "online", 00:13:05.508 "raid_level": "raid1", 00:13:05.508 "superblock": true, 00:13:05.508 "num_base_bdevs": 4, 00:13:05.508 "num_base_bdevs_discovered": 3, 00:13:05.508 "num_base_bdevs_operational": 3, 00:13:05.508 "base_bdevs_list": [ 00:13:05.508 { 00:13:05.508 "name": 
"spare", 00:13:05.509 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:13:05.509 "is_configured": true, 00:13:05.509 "data_offset": 2048, 00:13:05.509 "data_size": 63488 00:13:05.509 }, 00:13:05.509 { 00:13:05.509 "name": null, 00:13:05.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.509 "is_configured": false, 00:13:05.509 "data_offset": 2048, 00:13:05.509 "data_size": 63488 00:13:05.509 }, 00:13:05.509 { 00:13:05.509 "name": "BaseBdev3", 00:13:05.509 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:05.509 "is_configured": true, 00:13:05.509 "data_offset": 2048, 00:13:05.509 "data_size": 63488 00:13:05.509 }, 00:13:05.509 { 00:13:05.509 "name": "BaseBdev4", 00:13:05.509 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:05.509 "is_configured": true, 00:13:05.509 "data_offset": 2048, 00:13:05.509 "data_size": 63488 00:13:05.509 } 00:13:05.509 ] 00:13:05.509 }' 00:13:05.509 04:59:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ 
spare == \s\p\a\r\e ]] 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.509 [2024-11-21 04:59:22.121475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.509 "name": "raid_bdev1", 00:13:05.509 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:05.509 "strip_size_kb": 0, 00:13:05.509 "state": "online", 00:13:05.509 "raid_level": "raid1", 00:13:05.509 "superblock": true, 00:13:05.509 "num_base_bdevs": 4, 00:13:05.509 "num_base_bdevs_discovered": 2, 00:13:05.509 "num_base_bdevs_operational": 2, 00:13:05.509 "base_bdevs_list": [ 00:13:05.509 { 00:13:05.509 "name": null, 00:13:05.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.509 "is_configured": false, 00:13:05.509 "data_offset": 0, 00:13:05.509 "data_size": 63488 00:13:05.509 }, 00:13:05.509 { 00:13:05.509 "name": null, 00:13:05.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.509 "is_configured": false, 00:13:05.509 "data_offset": 2048, 00:13:05.509 "data_size": 63488 00:13:05.509 }, 00:13:05.509 { 00:13:05.509 "name": "BaseBdev3", 00:13:05.509 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:05.509 "is_configured": true, 00:13:05.509 "data_offset": 2048, 00:13:05.509 "data_size": 63488 00:13:05.509 }, 00:13:05.509 { 00:13:05.509 "name": "BaseBdev4", 00:13:05.509 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:05.509 "is_configured": true, 00:13:05.509 "data_offset": 2048, 00:13:05.509 "data_size": 63488 00:13:05.509 } 00:13:05.509 ] 00:13:05.509 }' 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.509 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.079 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:13:06.079 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.079 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.079 [2024-11-21 04:59:22.580775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.079 [2024-11-21 04:59:22.581109] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:06.079 [2024-11-21 04:59:22.581175] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:06.079 [2024-11-21 04:59:22.581284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.079 [2024-11-21 04:59:22.585865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:13:06.079 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.079 04:59:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:06.079 [2024-11-21 04:59:22.588058] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.020 04:59:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.020 "name": "raid_bdev1", 00:13:07.020 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:07.020 "strip_size_kb": 0, 00:13:07.020 "state": "online", 00:13:07.020 "raid_level": "raid1", 00:13:07.020 "superblock": true, 00:13:07.020 "num_base_bdevs": 4, 00:13:07.020 "num_base_bdevs_discovered": 3, 00:13:07.020 "num_base_bdevs_operational": 3, 00:13:07.020 "process": { 00:13:07.020 "type": "rebuild", 00:13:07.020 "target": "spare", 00:13:07.020 "progress": { 00:13:07.020 "blocks": 20480, 00:13:07.020 "percent": 32 00:13:07.020 } 00:13:07.020 }, 00:13:07.020 "base_bdevs_list": [ 00:13:07.020 { 00:13:07.020 "name": "spare", 00:13:07.020 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:13:07.020 "is_configured": true, 00:13:07.020 "data_offset": 2048, 00:13:07.020 "data_size": 63488 00:13:07.020 }, 00:13:07.020 { 00:13:07.020 "name": null, 00:13:07.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.020 "is_configured": false, 00:13:07.020 "data_offset": 2048, 00:13:07.020 "data_size": 63488 00:13:07.020 }, 00:13:07.020 { 00:13:07.020 "name": "BaseBdev3", 00:13:07.020 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:07.020 "is_configured": true, 00:13:07.020 "data_offset": 2048, 00:13:07.020 "data_size": 63488 00:13:07.020 }, 00:13:07.020 { 00:13:07.020 "name": "BaseBdev4", 00:13:07.020 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:07.020 "is_configured": true, 00:13:07.020 "data_offset": 2048, 00:13:07.020 
"data_size": 63488 00:13:07.020 } 00:13:07.020 ] 00:13:07.020 }' 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.020 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.020 [2024-11-21 04:59:23.736311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.280 [2024-11-21 04:59:23.793256] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:07.280 [2024-11-21 04:59:23.793416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.280 [2024-11-21 04:59:23.793436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.280 [2024-11-21 04:59:23.793446] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.280 04:59:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.280 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.281 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.281 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.281 "name": "raid_bdev1", 00:13:07.281 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:07.281 "strip_size_kb": 0, 00:13:07.281 "state": "online", 00:13:07.281 "raid_level": "raid1", 00:13:07.281 "superblock": true, 00:13:07.281 "num_base_bdevs": 4, 00:13:07.281 "num_base_bdevs_discovered": 2, 00:13:07.281 "num_base_bdevs_operational": 2, 00:13:07.281 "base_bdevs_list": [ 00:13:07.281 { 00:13:07.281 "name": null, 00:13:07.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.281 "is_configured": false, 00:13:07.281 "data_offset": 0, 00:13:07.281 "data_size": 
63488 00:13:07.281 }, 00:13:07.281 { 00:13:07.281 "name": null, 00:13:07.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.281 "is_configured": false, 00:13:07.281 "data_offset": 2048, 00:13:07.281 "data_size": 63488 00:13:07.281 }, 00:13:07.281 { 00:13:07.281 "name": "BaseBdev3", 00:13:07.281 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:07.281 "is_configured": true, 00:13:07.281 "data_offset": 2048, 00:13:07.281 "data_size": 63488 00:13:07.281 }, 00:13:07.281 { 00:13:07.281 "name": "BaseBdev4", 00:13:07.281 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:07.281 "is_configured": true, 00:13:07.281 "data_offset": 2048, 00:13:07.281 "data_size": 63488 00:13:07.281 } 00:13:07.281 ] 00:13:07.281 }' 00:13:07.281 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.281 04:59:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.540 04:59:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:07.540 04:59:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.540 04:59:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.540 [2024-11-21 04:59:24.237566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:07.540 [2024-11-21 04:59:24.237701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.540 [2024-11-21 04:59:24.237747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:07.540 [2024-11-21 04:59:24.237778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.540 [2024-11-21 04:59:24.238340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.540 [2024-11-21 04:59:24.238405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:13:07.540 [2024-11-21 04:59:24.238526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:07.540 [2024-11-21 04:59:24.238588] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:07.540 [2024-11-21 04:59:24.238638] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:07.540 [2024-11-21 04:59:24.238710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.540 [2024-11-21 04:59:24.243288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:13:07.540 spare 00:13:07.540 04:59:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.540 04:59:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:07.540 [2024-11-21 04:59:24.245255] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.929 04:59:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.929 "name": "raid_bdev1", 00:13:08.929 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:08.929 "strip_size_kb": 0, 00:13:08.929 "state": "online", 00:13:08.929 "raid_level": "raid1", 00:13:08.929 "superblock": true, 00:13:08.929 "num_base_bdevs": 4, 00:13:08.929 "num_base_bdevs_discovered": 3, 00:13:08.929 "num_base_bdevs_operational": 3, 00:13:08.929 "process": { 00:13:08.929 "type": "rebuild", 00:13:08.929 "target": "spare", 00:13:08.929 "progress": { 00:13:08.929 "blocks": 20480, 00:13:08.929 "percent": 32 00:13:08.929 } 00:13:08.929 }, 00:13:08.929 "base_bdevs_list": [ 00:13:08.929 { 00:13:08.929 "name": "spare", 00:13:08.929 "uuid": "c02469c0-3ae8-59ec-9654-4d3d8402513f", 00:13:08.929 "is_configured": true, 00:13:08.929 "data_offset": 2048, 00:13:08.929 "data_size": 63488 00:13:08.929 }, 00:13:08.929 { 00:13:08.929 "name": null, 00:13:08.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.929 "is_configured": false, 00:13:08.929 "data_offset": 2048, 00:13:08.929 "data_size": 63488 00:13:08.929 }, 00:13:08.929 { 00:13:08.929 "name": "BaseBdev3", 00:13:08.929 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:08.929 "is_configured": true, 00:13:08.929 "data_offset": 2048, 00:13:08.929 "data_size": 63488 00:13:08.929 }, 00:13:08.929 { 00:13:08.929 "name": "BaseBdev4", 00:13:08.929 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:08.929 "is_configured": true, 00:13:08.929 "data_offset": 2048, 00:13:08.929 "data_size": 63488 00:13:08.929 } 00:13:08.929 ] 00:13:08.929 }' 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.929 04:59:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.929 [2024-11-21 04:59:25.357975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.929 [2024-11-21 04:59:25.450402] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:08.929 [2024-11-21 04:59:25.450509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.929 [2024-11-21 04:59:25.450530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.929 [2024-11-21 04:59:25.450537] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.929 04:59:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.929 "name": "raid_bdev1", 00:13:08.929 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:08.929 "strip_size_kb": 0, 00:13:08.929 "state": "online", 00:13:08.929 "raid_level": "raid1", 00:13:08.929 "superblock": true, 00:13:08.929 "num_base_bdevs": 4, 00:13:08.929 "num_base_bdevs_discovered": 2, 00:13:08.929 "num_base_bdevs_operational": 2, 00:13:08.929 "base_bdevs_list": [ 00:13:08.929 { 00:13:08.929 "name": null, 00:13:08.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.929 "is_configured": false, 00:13:08.929 "data_offset": 0, 00:13:08.929 "data_size": 63488 00:13:08.929 }, 00:13:08.929 { 00:13:08.929 "name": null, 00:13:08.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.929 "is_configured": false, 00:13:08.929 "data_offset": 2048, 00:13:08.929 
"data_size": 63488 00:13:08.929 }, 00:13:08.929 { 00:13:08.929 "name": "BaseBdev3", 00:13:08.929 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:08.929 "is_configured": true, 00:13:08.929 "data_offset": 2048, 00:13:08.929 "data_size": 63488 00:13:08.929 }, 00:13:08.929 { 00:13:08.929 "name": "BaseBdev4", 00:13:08.929 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:08.929 "is_configured": true, 00:13:08.929 "data_offset": 2048, 00:13:08.929 "data_size": 63488 00:13:08.929 } 00:13:08.929 ] 00:13:08.929 }' 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.929 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.189 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.189 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.189 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.189 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.189 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.189 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.189 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.189 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.189 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.189 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.450 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.450 "name": "raid_bdev1", 
00:13:09.450 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:09.450 "strip_size_kb": 0, 00:13:09.450 "state": "online", 00:13:09.450 "raid_level": "raid1", 00:13:09.450 "superblock": true, 00:13:09.450 "num_base_bdevs": 4, 00:13:09.450 "num_base_bdevs_discovered": 2, 00:13:09.450 "num_base_bdevs_operational": 2, 00:13:09.450 "base_bdevs_list": [ 00:13:09.450 { 00:13:09.450 "name": null, 00:13:09.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.450 "is_configured": false, 00:13:09.450 "data_offset": 0, 00:13:09.450 "data_size": 63488 00:13:09.450 }, 00:13:09.450 { 00:13:09.450 "name": null, 00:13:09.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.450 "is_configured": false, 00:13:09.450 "data_offset": 2048, 00:13:09.450 "data_size": 63488 00:13:09.450 }, 00:13:09.450 { 00:13:09.450 "name": "BaseBdev3", 00:13:09.450 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:09.450 "is_configured": true, 00:13:09.450 "data_offset": 2048, 00:13:09.450 "data_size": 63488 00:13:09.450 }, 00:13:09.450 { 00:13:09.450 "name": "BaseBdev4", 00:13:09.450 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:09.450 "is_configured": true, 00:13:09.450 "data_offset": 2048, 00:13:09.450 "data_size": 63488 00:13:09.450 } 00:13:09.450 ] 00:13:09.450 }' 00:13:09.450 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.450 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.450 04:59:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.450 04:59:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.450 04:59:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:09.450 04:59:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.450 04:59:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.450 04:59:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.450 04:59:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:09.450 04:59:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.450 04:59:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.450 [2024-11-21 04:59:26.046281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:09.450 [2024-11-21 04:59:26.046339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.450 [2024-11-21 04:59:26.046363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:09.450 [2024-11-21 04:59:26.046372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.450 [2024-11-21 04:59:26.046818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.450 [2024-11-21 04:59:26.046844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.450 [2024-11-21 04:59:26.046924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:09.450 [2024-11-21 04:59:26.046944] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:09.450 [2024-11-21 04:59:26.046956] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:09.450 [2024-11-21 04:59:26.046966] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:09.450 BaseBdev1 00:13:09.450 04:59:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:09.450 04:59:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.388 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.388 "name": "raid_bdev1", 00:13:10.388 "uuid": 
"28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:10.388 "strip_size_kb": 0, 00:13:10.388 "state": "online", 00:13:10.388 "raid_level": "raid1", 00:13:10.388 "superblock": true, 00:13:10.388 "num_base_bdevs": 4, 00:13:10.388 "num_base_bdevs_discovered": 2, 00:13:10.388 "num_base_bdevs_operational": 2, 00:13:10.388 "base_bdevs_list": [ 00:13:10.388 { 00:13:10.388 "name": null, 00:13:10.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.388 "is_configured": false, 00:13:10.388 "data_offset": 0, 00:13:10.388 "data_size": 63488 00:13:10.388 }, 00:13:10.388 { 00:13:10.388 "name": null, 00:13:10.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.388 "is_configured": false, 00:13:10.388 "data_offset": 2048, 00:13:10.388 "data_size": 63488 00:13:10.388 }, 00:13:10.388 { 00:13:10.388 "name": "BaseBdev3", 00:13:10.388 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:10.388 "is_configured": true, 00:13:10.388 "data_offset": 2048, 00:13:10.388 "data_size": 63488 00:13:10.388 }, 00:13:10.388 { 00:13:10.388 "name": "BaseBdev4", 00:13:10.388 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:10.388 "is_configured": true, 00:13:10.388 "data_offset": 2048, 00:13:10.389 "data_size": 63488 00:13:10.389 } 00:13:10.389 ] 00:13:10.389 }' 00:13:10.389 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.389 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.957 "name": "raid_bdev1", 00:13:10.957 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:10.957 "strip_size_kb": 0, 00:13:10.957 "state": "online", 00:13:10.957 "raid_level": "raid1", 00:13:10.957 "superblock": true, 00:13:10.957 "num_base_bdevs": 4, 00:13:10.957 "num_base_bdevs_discovered": 2, 00:13:10.957 "num_base_bdevs_operational": 2, 00:13:10.957 "base_bdevs_list": [ 00:13:10.957 { 00:13:10.957 "name": null, 00:13:10.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.957 "is_configured": false, 00:13:10.957 "data_offset": 0, 00:13:10.957 "data_size": 63488 00:13:10.957 }, 00:13:10.957 { 00:13:10.957 "name": null, 00:13:10.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.957 "is_configured": false, 00:13:10.957 "data_offset": 2048, 00:13:10.957 "data_size": 63488 00:13:10.957 }, 00:13:10.957 { 00:13:10.957 "name": "BaseBdev3", 00:13:10.957 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:10.957 "is_configured": true, 00:13:10.957 "data_offset": 2048, 00:13:10.957 "data_size": 63488 00:13:10.957 }, 00:13:10.957 { 00:13:10.957 "name": "BaseBdev4", 00:13:10.957 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:10.957 "is_configured": true, 00:13:10.957 "data_offset": 2048, 00:13:10.957 "data_size": 63488 00:13:10.957 
} 00:13:10.957 ] 00:13:10.957 }' 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.957 [2024-11-21 04:59:27.671681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.957 [2024-11-21 04:59:27.671874] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:13:10.957 [2024-11-21 04:59:27.671898] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:10.957 request: 00:13:10.957 { 00:13:10.957 "base_bdev": "BaseBdev1", 00:13:10.957 "raid_bdev": "raid_bdev1", 00:13:10.957 "method": "bdev_raid_add_base_bdev", 00:13:10.957 "req_id": 1 00:13:10.957 } 00:13:10.957 Got JSON-RPC error response 00:13:10.957 response: 00:13:10.957 { 00:13:10.957 "code": -22, 00:13:10.957 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:10.957 } 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:10.957 04:59:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.337 "name": "raid_bdev1", 00:13:12.337 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:12.337 "strip_size_kb": 0, 00:13:12.337 "state": "online", 00:13:12.337 "raid_level": "raid1", 00:13:12.337 "superblock": true, 00:13:12.337 "num_base_bdevs": 4, 00:13:12.337 "num_base_bdevs_discovered": 2, 00:13:12.337 "num_base_bdevs_operational": 2, 00:13:12.337 "base_bdevs_list": [ 00:13:12.337 { 00:13:12.337 "name": null, 00:13:12.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.337 "is_configured": false, 00:13:12.337 "data_offset": 0, 00:13:12.337 "data_size": 63488 00:13:12.337 }, 00:13:12.337 { 00:13:12.337 "name": null, 00:13:12.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.337 "is_configured": false, 00:13:12.337 "data_offset": 2048, 00:13:12.337 "data_size": 63488 00:13:12.337 }, 00:13:12.337 { 00:13:12.337 "name": "BaseBdev3", 00:13:12.337 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:12.337 "is_configured": true, 00:13:12.337 
"data_offset": 2048, 00:13:12.337 "data_size": 63488 00:13:12.337 }, 00:13:12.337 { 00:13:12.337 "name": "BaseBdev4", 00:13:12.337 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:12.337 "is_configured": true, 00:13:12.337 "data_offset": 2048, 00:13:12.337 "data_size": 63488 00:13:12.337 } 00:13:12.337 ] 00:13:12.337 }' 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.337 04:59:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.597 "name": "raid_bdev1", 00:13:12.597 "uuid": "28b7b7bc-cbd4-4428-84e5-bed67b3264fc", 00:13:12.597 "strip_size_kb": 0, 00:13:12.597 "state": "online", 00:13:12.597 "raid_level": "raid1", 00:13:12.597 "superblock": true, 
00:13:12.597 "num_base_bdevs": 4, 00:13:12.597 "num_base_bdevs_discovered": 2, 00:13:12.597 "num_base_bdevs_operational": 2, 00:13:12.597 "base_bdevs_list": [ 00:13:12.597 { 00:13:12.597 "name": null, 00:13:12.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.597 "is_configured": false, 00:13:12.597 "data_offset": 0, 00:13:12.597 "data_size": 63488 00:13:12.597 }, 00:13:12.597 { 00:13:12.597 "name": null, 00:13:12.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.597 "is_configured": false, 00:13:12.597 "data_offset": 2048, 00:13:12.597 "data_size": 63488 00:13:12.597 }, 00:13:12.597 { 00:13:12.597 "name": "BaseBdev3", 00:13:12.597 "uuid": "99f40ad1-896f-5e82-952a-1a3e28c6bcc6", 00:13:12.597 "is_configured": true, 00:13:12.597 "data_offset": 2048, 00:13:12.597 "data_size": 63488 00:13:12.597 }, 00:13:12.597 { 00:13:12.597 "name": "BaseBdev4", 00:13:12.597 "uuid": "c360fe5b-c5f7-5801-8cd8-1d206d291a01", 00:13:12.597 "is_configured": true, 00:13:12.597 "data_offset": 2048, 00:13:12.597 "data_size": 63488 00:13:12.597 } 00:13:12.597 ] 00:13:12.597 }' 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89876 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 89876 ']' 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 89876 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:12.597 04:59:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89876 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89876' 00:13:12.597 killing process with pid 89876 00:13:12.597 Received shutdown signal, test time was about 17.409340 seconds 00:13:12.597 00:13:12.597 Latency(us) 00:13:12.597 [2024-11-21T04:59:29.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.597 [2024-11-21T04:59:29.332Z] =================================================================================================================== 00:13:12.597 [2024-11-21T04:59:29.332Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 89876 00:13:12.597 [2024-11-21 04:59:29.285486] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.597 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 89876 00:13:12.597 [2024-11-21 04:59:29.285651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.597 [2024-11-21 04:59:29.285724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.597 [2024-11-21 04:59:29.285737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:12.856 [2024-11-21 04:59:29.333763] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.857 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:13:12.857 00:13:12.857 real 0m19.410s 00:13:12.857 user 0m25.813s 00:13:12.857 sys 0m2.400s 00:13:12.857 ************************************ 00:13:12.857 END TEST raid_rebuild_test_sb_io 00:13:12.857 ************************************ 00:13:12.857 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.857 04:59:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.115 04:59:29 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:13.115 04:59:29 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:13.115 04:59:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:13.115 04:59:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.115 04:59:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.115 ************************************ 00:13:13.115 START TEST raid5f_state_function_test 00:13:13.115 ************************************ 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90581 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90581' 00:13:13.115 Process raid pid: 90581 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90581 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 90581 ']' 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.115 04:59:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.116 04:59:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.116 04:59:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.116 [2024-11-21 04:59:29.712820] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:13:13.116 [2024-11-21 04:59:29.713045] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.389 [2024-11-21 04:59:29.885387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.389 [2024-11-21 04:59:29.912188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.389 [2024-11-21 04:59:29.956608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.389 [2024-11-21 04:59:29.956725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.958 [2024-11-21 04:59:30.559143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.958 [2024-11-21 04:59:30.559253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.958 [2024-11-21 04:59:30.559275] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.958 [2024-11-21 04:59:30.559287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.958 [2024-11-21 04:59:30.559295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:13.958 [2024-11-21 04:59:30.559317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.958 "name": "Existed_Raid", 00:13:13.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.958 "strip_size_kb": 64, 00:13:13.958 "state": "configuring", 00:13:13.958 "raid_level": "raid5f", 00:13:13.958 "superblock": false, 00:13:13.958 "num_base_bdevs": 3, 00:13:13.958 "num_base_bdevs_discovered": 0, 00:13:13.958 "num_base_bdevs_operational": 3, 00:13:13.958 "base_bdevs_list": [ 00:13:13.958 { 00:13:13.958 "name": "BaseBdev1", 00:13:13.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.958 "is_configured": false, 00:13:13.958 "data_offset": 0, 00:13:13.958 "data_size": 0 00:13:13.958 }, 00:13:13.958 { 00:13:13.958 "name": "BaseBdev2", 00:13:13.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.958 "is_configured": false, 00:13:13.958 "data_offset": 0, 00:13:13.958 "data_size": 0 00:13:13.958 }, 00:13:13.958 { 00:13:13.958 "name": "BaseBdev3", 00:13:13.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.958 "is_configured": false, 00:13:13.958 "data_offset": 0, 00:13:13.958 "data_size": 0 00:13:13.958 } 00:13:13.958 ] 00:13:13.958 }' 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.958 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.528 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 [2024-11-21 04:59:30.990272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.528 [2024-11-21 04:59:30.990314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:13:14.528 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 04:59:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:14.528 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 04:59:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 [2024-11-21 04:59:30.998256] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.528 [2024-11-21 04:59:30.998298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.528 [2024-11-21 04:59:30.998307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.528 [2024-11-21 04:59:30.998316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.528 [2024-11-21 04:59:30.998322] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.528 [2024-11-21 04:59:30.998330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 [2024-11-21 04:59:31.015374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.528 BaseBdev1 00:13:14.528 04:59:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 [ 00:13:14.528 { 00:13:14.528 "name": "BaseBdev1", 00:13:14.528 "aliases": [ 00:13:14.528 "1f859d15-698c-42bb-9880-bc098d76af5e" 00:13:14.528 ], 00:13:14.528 "product_name": "Malloc disk", 00:13:14.528 "block_size": 512, 00:13:14.528 "num_blocks": 65536, 00:13:14.528 "uuid": "1f859d15-698c-42bb-9880-bc098d76af5e", 00:13:14.528 "assigned_rate_limits": { 00:13:14.528 "rw_ios_per_sec": 0, 00:13:14.528 
"rw_mbytes_per_sec": 0, 00:13:14.528 "r_mbytes_per_sec": 0, 00:13:14.528 "w_mbytes_per_sec": 0 00:13:14.528 }, 00:13:14.528 "claimed": true, 00:13:14.528 "claim_type": "exclusive_write", 00:13:14.528 "zoned": false, 00:13:14.528 "supported_io_types": { 00:13:14.528 "read": true, 00:13:14.528 "write": true, 00:13:14.528 "unmap": true, 00:13:14.528 "flush": true, 00:13:14.528 "reset": true, 00:13:14.528 "nvme_admin": false, 00:13:14.528 "nvme_io": false, 00:13:14.528 "nvme_io_md": false, 00:13:14.528 "write_zeroes": true, 00:13:14.528 "zcopy": true, 00:13:14.528 "get_zone_info": false, 00:13:14.528 "zone_management": false, 00:13:14.528 "zone_append": false, 00:13:14.528 "compare": false, 00:13:14.528 "compare_and_write": false, 00:13:14.528 "abort": true, 00:13:14.528 "seek_hole": false, 00:13:14.528 "seek_data": false, 00:13:14.528 "copy": true, 00:13:14.528 "nvme_iov_md": false 00:13:14.528 }, 00:13:14.528 "memory_domains": [ 00:13:14.528 { 00:13:14.528 "dma_device_id": "system", 00:13:14.528 "dma_device_type": 1 00:13:14.528 }, 00:13:14.528 { 00:13:14.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.528 "dma_device_type": 2 00:13:14.528 } 00:13:14.528 ], 00:13:14.528 "driver_specific": {} 00:13:14.528 } 00:13:14.528 ] 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.528 04:59:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.528 "name": "Existed_Raid", 00:13:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.528 "strip_size_kb": 64, 00:13:14.528 "state": "configuring", 00:13:14.528 "raid_level": "raid5f", 00:13:14.528 "superblock": false, 00:13:14.528 "num_base_bdevs": 3, 00:13:14.528 "num_base_bdevs_discovered": 1, 00:13:14.528 "num_base_bdevs_operational": 3, 00:13:14.528 "base_bdevs_list": [ 00:13:14.528 { 00:13:14.528 "name": "BaseBdev1", 00:13:14.528 "uuid": "1f859d15-698c-42bb-9880-bc098d76af5e", 00:13:14.528 "is_configured": true, 00:13:14.528 "data_offset": 0, 00:13:14.528 "data_size": 65536 00:13:14.528 }, 00:13:14.528 { 00:13:14.528 "name": 
"BaseBdev2", 00:13:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.528 "is_configured": false, 00:13:14.528 "data_offset": 0, 00:13:14.528 "data_size": 0 00:13:14.528 }, 00:13:14.528 { 00:13:14.528 "name": "BaseBdev3", 00:13:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.528 "is_configured": false, 00:13:14.528 "data_offset": 0, 00:13:14.528 "data_size": 0 00:13:14.528 } 00:13:14.528 ] 00:13:14.528 }' 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.528 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.789 [2024-11-21 04:59:31.442700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.789 [2024-11-21 04:59:31.442829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.789 [2024-11-21 04:59:31.454725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.789 [2024-11-21 04:59:31.456737] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:14.789 [2024-11-21 04:59:31.456818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.789 [2024-11-21 04:59:31.456876] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.789 [2024-11-21 04:59:31.456903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.789 "name": "Existed_Raid", 00:13:14.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.789 "strip_size_kb": 64, 00:13:14.789 "state": "configuring", 00:13:14.789 "raid_level": "raid5f", 00:13:14.789 "superblock": false, 00:13:14.789 "num_base_bdevs": 3, 00:13:14.789 "num_base_bdevs_discovered": 1, 00:13:14.789 "num_base_bdevs_operational": 3, 00:13:14.789 "base_bdevs_list": [ 00:13:14.789 { 00:13:14.789 "name": "BaseBdev1", 00:13:14.789 "uuid": "1f859d15-698c-42bb-9880-bc098d76af5e", 00:13:14.789 "is_configured": true, 00:13:14.789 "data_offset": 0, 00:13:14.789 "data_size": 65536 00:13:14.789 }, 00:13:14.789 { 00:13:14.789 "name": "BaseBdev2", 00:13:14.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.789 "is_configured": false, 00:13:14.789 "data_offset": 0, 00:13:14.789 "data_size": 0 00:13:14.789 }, 00:13:14.789 { 00:13:14.789 "name": "BaseBdev3", 00:13:14.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.789 "is_configured": false, 00:13:14.789 "data_offset": 0, 00:13:14.789 "data_size": 0 00:13:14.789 } 00:13:14.789 ] 00:13:14.789 }' 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.789 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.358 04:59:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:15.358 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.358 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.358 [2024-11-21 04:59:31.945433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.358 BaseBdev2 00:13:15.358 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.358 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:15.358 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:15.358 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.359 [ 00:13:15.359 { 00:13:15.359 "name": "BaseBdev2", 00:13:15.359 "aliases": [ 00:13:15.359 "86f1e97c-a83b-4449-afc4-a671945ea37c" 00:13:15.359 ], 00:13:15.359 "product_name": "Malloc disk", 00:13:15.359 "block_size": 512, 00:13:15.359 "num_blocks": 65536, 00:13:15.359 "uuid": "86f1e97c-a83b-4449-afc4-a671945ea37c", 00:13:15.359 "assigned_rate_limits": { 00:13:15.359 "rw_ios_per_sec": 0, 00:13:15.359 "rw_mbytes_per_sec": 0, 00:13:15.359 "r_mbytes_per_sec": 0, 00:13:15.359 "w_mbytes_per_sec": 0 00:13:15.359 }, 00:13:15.359 "claimed": true, 00:13:15.359 "claim_type": "exclusive_write", 00:13:15.359 "zoned": false, 00:13:15.359 "supported_io_types": { 00:13:15.359 "read": true, 00:13:15.359 "write": true, 00:13:15.359 "unmap": true, 00:13:15.359 "flush": true, 00:13:15.359 "reset": true, 00:13:15.359 "nvme_admin": false, 00:13:15.359 "nvme_io": false, 00:13:15.359 "nvme_io_md": false, 00:13:15.359 "write_zeroes": true, 00:13:15.359 "zcopy": true, 00:13:15.359 "get_zone_info": false, 00:13:15.359 "zone_management": false, 00:13:15.359 "zone_append": false, 00:13:15.359 "compare": false, 00:13:15.359 "compare_and_write": false, 00:13:15.359 "abort": true, 00:13:15.359 "seek_hole": false, 00:13:15.359 "seek_data": false, 00:13:15.359 "copy": true, 00:13:15.359 "nvme_iov_md": false 00:13:15.359 }, 00:13:15.359 "memory_domains": [ 00:13:15.359 { 00:13:15.359 "dma_device_id": "system", 00:13:15.359 "dma_device_type": 1 00:13:15.359 }, 00:13:15.359 { 00:13:15.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.359 "dma_device_type": 2 00:13:15.359 } 00:13:15.359 ], 00:13:15.359 "driver_specific": {} 00:13:15.359 } 00:13:15.359 ] 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.359 04:59:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.359 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.359 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:15.359 "name": "Existed_Raid", 00:13:15.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.359 "strip_size_kb": 64, 00:13:15.359 "state": "configuring", 00:13:15.359 "raid_level": "raid5f", 00:13:15.359 "superblock": false, 00:13:15.359 "num_base_bdevs": 3, 00:13:15.359 "num_base_bdevs_discovered": 2, 00:13:15.359 "num_base_bdevs_operational": 3, 00:13:15.359 "base_bdevs_list": [ 00:13:15.359 { 00:13:15.359 "name": "BaseBdev1", 00:13:15.359 "uuid": "1f859d15-698c-42bb-9880-bc098d76af5e", 00:13:15.359 "is_configured": true, 00:13:15.359 "data_offset": 0, 00:13:15.359 "data_size": 65536 00:13:15.359 }, 00:13:15.359 { 00:13:15.359 "name": "BaseBdev2", 00:13:15.359 "uuid": "86f1e97c-a83b-4449-afc4-a671945ea37c", 00:13:15.359 "is_configured": true, 00:13:15.359 "data_offset": 0, 00:13:15.359 "data_size": 65536 00:13:15.359 }, 00:13:15.359 { 00:13:15.359 "name": "BaseBdev3", 00:13:15.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.359 "is_configured": false, 00:13:15.359 "data_offset": 0, 00:13:15.359 "data_size": 0 00:13:15.359 } 00:13:15.359 ] 00:13:15.359 }' 00:13:15.359 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.359 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.928 [2024-11-21 04:59:32.446549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:15.928 [2024-11-21 04:59:32.446713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:15.928 [2024-11-21 04:59:32.446754] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:15.928 [2024-11-21 04:59:32.447190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:15.928 [2024-11-21 04:59:32.447835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:15.928 [2024-11-21 04:59:32.447894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:15.928 [2024-11-21 04:59:32.448252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.928 BaseBdev3 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.928 [ 00:13:15.928 { 00:13:15.928 "name": "BaseBdev3", 00:13:15.928 "aliases": [ 00:13:15.928 "1e0c8d03-708a-4deb-be7b-52dedf77ee85" 00:13:15.928 ], 00:13:15.928 "product_name": "Malloc disk", 00:13:15.928 "block_size": 512, 00:13:15.928 "num_blocks": 65536, 00:13:15.928 "uuid": "1e0c8d03-708a-4deb-be7b-52dedf77ee85", 00:13:15.928 "assigned_rate_limits": { 00:13:15.928 "rw_ios_per_sec": 0, 00:13:15.928 "rw_mbytes_per_sec": 0, 00:13:15.928 "r_mbytes_per_sec": 0, 00:13:15.928 "w_mbytes_per_sec": 0 00:13:15.928 }, 00:13:15.928 "claimed": true, 00:13:15.928 "claim_type": "exclusive_write", 00:13:15.928 "zoned": false, 00:13:15.928 "supported_io_types": { 00:13:15.928 "read": true, 00:13:15.928 "write": true, 00:13:15.928 "unmap": true, 00:13:15.928 "flush": true, 00:13:15.928 "reset": true, 00:13:15.928 "nvme_admin": false, 00:13:15.928 "nvme_io": false, 00:13:15.928 "nvme_io_md": false, 00:13:15.928 "write_zeroes": true, 00:13:15.928 "zcopy": true, 00:13:15.928 "get_zone_info": false, 00:13:15.928 "zone_management": false, 00:13:15.928 "zone_append": false, 00:13:15.928 "compare": false, 00:13:15.928 "compare_and_write": false, 00:13:15.928 "abort": true, 00:13:15.928 "seek_hole": false, 00:13:15.928 "seek_data": false, 00:13:15.928 "copy": true, 00:13:15.928 "nvme_iov_md": false 00:13:15.928 }, 00:13:15.928 "memory_domains": [ 00:13:15.928 { 00:13:15.928 "dma_device_id": "system", 00:13:15.928 "dma_device_type": 1 00:13:15.928 }, 00:13:15.928 { 00:13:15.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.928 "dma_device_type": 2 00:13:15.928 } 00:13:15.928 ], 00:13:15.928 "driver_specific": {} 00:13:15.928 } 00:13:15.928 ] 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.928 04:59:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.928 "name": "Existed_Raid", 00:13:15.928 "uuid": "38dae61f-08e1-418c-acfb-546ca9cb25ef", 00:13:15.928 "strip_size_kb": 64, 00:13:15.928 "state": "online", 00:13:15.928 "raid_level": "raid5f", 00:13:15.928 "superblock": false, 00:13:15.928 "num_base_bdevs": 3, 00:13:15.928 "num_base_bdevs_discovered": 3, 00:13:15.928 "num_base_bdevs_operational": 3, 00:13:15.928 "base_bdevs_list": [ 00:13:15.928 { 00:13:15.928 "name": "BaseBdev1", 00:13:15.928 "uuid": "1f859d15-698c-42bb-9880-bc098d76af5e", 00:13:15.928 "is_configured": true, 00:13:15.928 "data_offset": 0, 00:13:15.928 "data_size": 65536 00:13:15.928 }, 00:13:15.928 { 00:13:15.928 "name": "BaseBdev2", 00:13:15.928 "uuid": "86f1e97c-a83b-4449-afc4-a671945ea37c", 00:13:15.928 "is_configured": true, 00:13:15.928 "data_offset": 0, 00:13:15.928 "data_size": 65536 00:13:15.928 }, 00:13:15.928 { 00:13:15.928 "name": "BaseBdev3", 00:13:15.928 "uuid": "1e0c8d03-708a-4deb-be7b-52dedf77ee85", 00:13:15.928 "is_configured": true, 00:13:15.928 "data_offset": 0, 00:13:15.928 "data_size": 65536 00:13:15.928 } 00:13:15.928 ] 00:13:15.928 }' 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.928 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.498 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:16.499 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:16.499 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:16.499 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:16.499 04:59:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:16.499 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:16.499 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:16.499 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.499 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.499 04:59:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:16.499 [2024-11-21 04:59:32.965984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.499 04:59:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:16.499 "name": "Existed_Raid", 00:13:16.499 "aliases": [ 00:13:16.499 "38dae61f-08e1-418c-acfb-546ca9cb25ef" 00:13:16.499 ], 00:13:16.499 "product_name": "Raid Volume", 00:13:16.499 "block_size": 512, 00:13:16.499 "num_blocks": 131072, 00:13:16.499 "uuid": "38dae61f-08e1-418c-acfb-546ca9cb25ef", 00:13:16.499 "assigned_rate_limits": { 00:13:16.499 "rw_ios_per_sec": 0, 00:13:16.499 "rw_mbytes_per_sec": 0, 00:13:16.499 "r_mbytes_per_sec": 0, 00:13:16.499 "w_mbytes_per_sec": 0 00:13:16.499 }, 00:13:16.499 "claimed": false, 00:13:16.499 "zoned": false, 00:13:16.499 "supported_io_types": { 00:13:16.499 "read": true, 00:13:16.499 "write": true, 00:13:16.499 "unmap": false, 00:13:16.499 "flush": false, 00:13:16.499 "reset": true, 00:13:16.499 "nvme_admin": false, 00:13:16.499 "nvme_io": false, 00:13:16.499 "nvme_io_md": false, 00:13:16.499 "write_zeroes": true, 00:13:16.499 "zcopy": false, 00:13:16.499 "get_zone_info": false, 00:13:16.499 "zone_management": false, 00:13:16.499 "zone_append": false, 
00:13:16.499 "compare": false, 00:13:16.499 "compare_and_write": false, 00:13:16.499 "abort": false, 00:13:16.499 "seek_hole": false, 00:13:16.499 "seek_data": false, 00:13:16.499 "copy": false, 00:13:16.499 "nvme_iov_md": false 00:13:16.499 }, 00:13:16.499 "driver_specific": { 00:13:16.499 "raid": { 00:13:16.499 "uuid": "38dae61f-08e1-418c-acfb-546ca9cb25ef", 00:13:16.499 "strip_size_kb": 64, 00:13:16.499 "state": "online", 00:13:16.499 "raid_level": "raid5f", 00:13:16.499 "superblock": false, 00:13:16.499 "num_base_bdevs": 3, 00:13:16.499 "num_base_bdevs_discovered": 3, 00:13:16.499 "num_base_bdevs_operational": 3, 00:13:16.499 "base_bdevs_list": [ 00:13:16.499 { 00:13:16.499 "name": "BaseBdev1", 00:13:16.499 "uuid": "1f859d15-698c-42bb-9880-bc098d76af5e", 00:13:16.499 "is_configured": true, 00:13:16.499 "data_offset": 0, 00:13:16.499 "data_size": 65536 00:13:16.499 }, 00:13:16.499 { 00:13:16.499 "name": "BaseBdev2", 00:13:16.499 "uuid": "86f1e97c-a83b-4449-afc4-a671945ea37c", 00:13:16.499 "is_configured": true, 00:13:16.499 "data_offset": 0, 00:13:16.499 "data_size": 65536 00:13:16.499 }, 00:13:16.499 { 00:13:16.499 "name": "BaseBdev3", 00:13:16.499 "uuid": "1e0c8d03-708a-4deb-be7b-52dedf77ee85", 00:13:16.499 "is_configured": true, 00:13:16.499 "data_offset": 0, 00:13:16.499 "data_size": 65536 00:13:16.499 } 00:13:16.499 ] 00:13:16.499 } 00:13:16.499 } 00:13:16.499 }' 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:16.499 BaseBdev2 00:13:16.499 BaseBdev3' 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.499 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.760 [2024-11-21 04:59:33.261304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:16.760 
04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.760 "name": "Existed_Raid", 00:13:16.760 "uuid": "38dae61f-08e1-418c-acfb-546ca9cb25ef", 00:13:16.760 "strip_size_kb": 64, 00:13:16.760 "state": 
"online", 00:13:16.760 "raid_level": "raid5f", 00:13:16.760 "superblock": false, 00:13:16.760 "num_base_bdevs": 3, 00:13:16.760 "num_base_bdevs_discovered": 2, 00:13:16.760 "num_base_bdevs_operational": 2, 00:13:16.760 "base_bdevs_list": [ 00:13:16.760 { 00:13:16.760 "name": null, 00:13:16.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.760 "is_configured": false, 00:13:16.760 "data_offset": 0, 00:13:16.760 "data_size": 65536 00:13:16.760 }, 00:13:16.760 { 00:13:16.760 "name": "BaseBdev2", 00:13:16.760 "uuid": "86f1e97c-a83b-4449-afc4-a671945ea37c", 00:13:16.760 "is_configured": true, 00:13:16.760 "data_offset": 0, 00:13:16.760 "data_size": 65536 00:13:16.760 }, 00:13:16.760 { 00:13:16.760 "name": "BaseBdev3", 00:13:16.760 "uuid": "1e0c8d03-708a-4deb-be7b-52dedf77ee85", 00:13:16.760 "is_configured": true, 00:13:16.760 "data_offset": 0, 00:13:16.760 "data_size": 65536 00:13:16.760 } 00:13:16.760 ] 00:13:16.760 }' 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.760 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.020 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:17.020 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.020 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.020 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.020 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.020 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.020 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.280 04:59:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.280 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.281 [2024-11-21 04:59:33.784497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:17.281 [2024-11-21 04:59:33.785138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.281 [2024-11-21 04:59:33.812827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.281 [2024-11-21 04:59:33.872697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.281 [2024-11-21 04:59:33.872757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.281 BaseBdev2 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.281 04:59:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:17.281 [ 00:13:17.281 { 00:13:17.281 "name": "BaseBdev2", 00:13:17.281 "aliases": [ 00:13:17.281 "5a1004cd-695f-406e-9a44-aaee2010db1f" 00:13:17.281 ], 00:13:17.281 "product_name": "Malloc disk", 00:13:17.281 "block_size": 512, 00:13:17.281 "num_blocks": 65536, 00:13:17.281 "uuid": "5a1004cd-695f-406e-9a44-aaee2010db1f", 00:13:17.281 "assigned_rate_limits": { 00:13:17.281 "rw_ios_per_sec": 0, 00:13:17.281 "rw_mbytes_per_sec": 0, 00:13:17.281 "r_mbytes_per_sec": 0, 00:13:17.281 "w_mbytes_per_sec": 0 00:13:17.281 }, 00:13:17.281 "claimed": false, 00:13:17.281 "zoned": false, 00:13:17.281 "supported_io_types": { 00:13:17.281 "read": true, 00:13:17.281 "write": true, 00:13:17.281 "unmap": true, 00:13:17.281 "flush": true, 00:13:17.281 "reset": true, 00:13:17.281 "nvme_admin": false, 00:13:17.281 "nvme_io": false, 00:13:17.281 "nvme_io_md": false, 00:13:17.281 "write_zeroes": true, 00:13:17.281 "zcopy": true, 00:13:17.281 "get_zone_info": false, 00:13:17.281 "zone_management": false, 00:13:17.281 "zone_append": false, 00:13:17.281 "compare": false, 00:13:17.281 "compare_and_write": false, 00:13:17.281 "abort": true, 00:13:17.281 "seek_hole": false, 00:13:17.281 "seek_data": false, 00:13:17.281 "copy": true, 00:13:17.281 "nvme_iov_md": false 00:13:17.281 }, 00:13:17.281 "memory_domains": [ 00:13:17.281 { 00:13:17.281 "dma_device_id": "system", 00:13:17.281 "dma_device_type": 1 00:13:17.281 }, 00:13:17.281 { 00:13:17.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.281 "dma_device_type": 2 00:13:17.281 } 00:13:17.281 ], 00:13:17.281 "driver_specific": {} 00:13:17.281 } 00:13:17.281 ] 00:13:17.281 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.281 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.281 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.281 04:59:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.281 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:17.281 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.281 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.542 BaseBdev3 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.542 [ 00:13:17.542 { 00:13:17.542 "name": "BaseBdev3", 00:13:17.542 "aliases": [ 00:13:17.542 "13236382-5e42-4b39-b4a0-fdde89f6e3da" 00:13:17.542 ], 00:13:17.542 "product_name": "Malloc disk", 00:13:17.542 "block_size": 512, 00:13:17.542 "num_blocks": 65536, 00:13:17.542 "uuid": "13236382-5e42-4b39-b4a0-fdde89f6e3da", 00:13:17.542 "assigned_rate_limits": { 00:13:17.542 "rw_ios_per_sec": 0, 00:13:17.542 "rw_mbytes_per_sec": 0, 00:13:17.542 "r_mbytes_per_sec": 0, 00:13:17.542 "w_mbytes_per_sec": 0 00:13:17.542 }, 00:13:17.542 "claimed": false, 00:13:17.542 "zoned": false, 00:13:17.542 "supported_io_types": { 00:13:17.542 "read": true, 00:13:17.542 "write": true, 00:13:17.542 "unmap": true, 00:13:17.542 "flush": true, 00:13:17.542 "reset": true, 00:13:17.542 "nvme_admin": false, 00:13:17.542 "nvme_io": false, 00:13:17.542 "nvme_io_md": false, 00:13:17.542 "write_zeroes": true, 00:13:17.542 "zcopy": true, 00:13:17.542 "get_zone_info": false, 00:13:17.542 "zone_management": false, 00:13:17.542 "zone_append": false, 00:13:17.542 "compare": false, 00:13:17.542 "compare_and_write": false, 00:13:17.542 "abort": true, 00:13:17.542 "seek_hole": false, 00:13:17.542 "seek_data": false, 00:13:17.542 "copy": true, 00:13:17.542 "nvme_iov_md": false 00:13:17.542 }, 00:13:17.542 "memory_domains": [ 00:13:17.542 { 00:13:17.542 "dma_device_id": "system", 00:13:17.542 "dma_device_type": 1 00:13:17.542 }, 00:13:17.542 { 00:13:17.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.542 "dma_device_type": 2 00:13:17.542 } 00:13:17.542 ], 00:13:17.542 "driver_specific": {} 00:13:17.542 } 00:13:17.542 ] 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.542 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.542 04:59:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.543 [2024-11-21 04:59:34.067982] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.543 [2024-11-21 04:59:34.068142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.543 [2024-11-21 04:59:34.068204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.543 [2024-11-21 04:59:34.070418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.543 04:59:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.543 "name": "Existed_Raid", 00:13:17.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.543 "strip_size_kb": 64, 00:13:17.543 "state": "configuring", 00:13:17.543 "raid_level": "raid5f", 00:13:17.543 "superblock": false, 00:13:17.543 "num_base_bdevs": 3, 00:13:17.543 "num_base_bdevs_discovered": 2, 00:13:17.543 "num_base_bdevs_operational": 3, 00:13:17.543 "base_bdevs_list": [ 00:13:17.543 { 00:13:17.543 "name": "BaseBdev1", 00:13:17.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.543 "is_configured": false, 00:13:17.543 "data_offset": 0, 00:13:17.543 "data_size": 0 00:13:17.543 }, 00:13:17.543 { 00:13:17.543 "name": "BaseBdev2", 00:13:17.543 "uuid": "5a1004cd-695f-406e-9a44-aaee2010db1f", 00:13:17.543 "is_configured": true, 00:13:17.543 "data_offset": 0, 00:13:17.543 "data_size": 65536 00:13:17.543 }, 00:13:17.543 { 00:13:17.543 "name": "BaseBdev3", 00:13:17.543 "uuid": "13236382-5e42-4b39-b4a0-fdde89f6e3da", 00:13:17.543 "is_configured": true, 
00:13:17.543 "data_offset": 0, 00:13:17.543 "data_size": 65536 00:13:17.543 } 00:13:17.543 ] 00:13:17.543 }' 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.543 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.803 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:17.803 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.803 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.803 [2024-11-21 04:59:34.531186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.063 04:59:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.063 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.063 "name": "Existed_Raid", 00:13:18.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.063 "strip_size_kb": 64, 00:13:18.063 "state": "configuring", 00:13:18.063 "raid_level": "raid5f", 00:13:18.063 "superblock": false, 00:13:18.063 "num_base_bdevs": 3, 00:13:18.063 "num_base_bdevs_discovered": 1, 00:13:18.063 "num_base_bdevs_operational": 3, 00:13:18.063 "base_bdevs_list": [ 00:13:18.063 { 00:13:18.063 "name": "BaseBdev1", 00:13:18.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.063 "is_configured": false, 00:13:18.063 "data_offset": 0, 00:13:18.063 "data_size": 0 00:13:18.063 }, 00:13:18.063 { 00:13:18.063 "name": null, 00:13:18.063 "uuid": "5a1004cd-695f-406e-9a44-aaee2010db1f", 00:13:18.063 "is_configured": false, 00:13:18.063 "data_offset": 0, 00:13:18.063 "data_size": 65536 00:13:18.063 }, 00:13:18.063 { 00:13:18.063 "name": "BaseBdev3", 00:13:18.063 "uuid": "13236382-5e42-4b39-b4a0-fdde89f6e3da", 00:13:18.063 "is_configured": true, 00:13:18.063 "data_offset": 0, 00:13:18.063 "data_size": 65536 00:13:18.063 } 00:13:18.064 ] 00:13:18.064 }' 00:13:18.064 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.064 04:59:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.324 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.324 04:59:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:18.324 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.324 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.324 04:59:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.324 [2024-11-21 04:59:35.047128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.324 BaseBdev1 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.324 04:59:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.324 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.586 [ 00:13:18.586 { 00:13:18.586 "name": "BaseBdev1", 00:13:18.586 "aliases": [ 00:13:18.586 "2744f270-2786-4799-91fd-9396ea84dd5a" 00:13:18.586 ], 00:13:18.586 "product_name": "Malloc disk", 00:13:18.586 "block_size": 512, 00:13:18.586 "num_blocks": 65536, 00:13:18.586 "uuid": "2744f270-2786-4799-91fd-9396ea84dd5a", 00:13:18.586 "assigned_rate_limits": { 00:13:18.586 "rw_ios_per_sec": 0, 00:13:18.586 "rw_mbytes_per_sec": 0, 00:13:18.586 "r_mbytes_per_sec": 0, 00:13:18.586 "w_mbytes_per_sec": 0 00:13:18.586 }, 00:13:18.586 "claimed": true, 00:13:18.586 "claim_type": "exclusive_write", 00:13:18.586 "zoned": false, 00:13:18.586 "supported_io_types": { 00:13:18.586 "read": true, 00:13:18.586 "write": true, 00:13:18.586 "unmap": true, 00:13:18.586 "flush": true, 00:13:18.586 "reset": true, 00:13:18.586 "nvme_admin": false, 00:13:18.586 "nvme_io": false, 00:13:18.586 "nvme_io_md": false, 00:13:18.586 "write_zeroes": true, 00:13:18.586 "zcopy": true, 00:13:18.586 "get_zone_info": false, 00:13:18.586 "zone_management": false, 00:13:18.586 "zone_append": false, 00:13:18.586 
"compare": false, 00:13:18.586 "compare_and_write": false, 00:13:18.586 "abort": true, 00:13:18.586 "seek_hole": false, 00:13:18.586 "seek_data": false, 00:13:18.586 "copy": true, 00:13:18.586 "nvme_iov_md": false 00:13:18.586 }, 00:13:18.586 "memory_domains": [ 00:13:18.586 { 00:13:18.586 "dma_device_id": "system", 00:13:18.586 "dma_device_type": 1 00:13:18.586 }, 00:13:18.586 { 00:13:18.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.586 "dma_device_type": 2 00:13:18.586 } 00:13:18.586 ], 00:13:18.586 "driver_specific": {} 00:13:18.586 } 00:13:18.586 ] 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.586 04:59:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.586 "name": "Existed_Raid", 00:13:18.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.586 "strip_size_kb": 64, 00:13:18.586 "state": "configuring", 00:13:18.586 "raid_level": "raid5f", 00:13:18.586 "superblock": false, 00:13:18.586 "num_base_bdevs": 3, 00:13:18.586 "num_base_bdevs_discovered": 2, 00:13:18.586 "num_base_bdevs_operational": 3, 00:13:18.586 "base_bdevs_list": [ 00:13:18.586 { 00:13:18.586 "name": "BaseBdev1", 00:13:18.586 "uuid": "2744f270-2786-4799-91fd-9396ea84dd5a", 00:13:18.586 "is_configured": true, 00:13:18.586 "data_offset": 0, 00:13:18.586 "data_size": 65536 00:13:18.586 }, 00:13:18.586 { 00:13:18.586 "name": null, 00:13:18.586 "uuid": "5a1004cd-695f-406e-9a44-aaee2010db1f", 00:13:18.586 "is_configured": false, 00:13:18.586 "data_offset": 0, 00:13:18.586 "data_size": 65536 00:13:18.586 }, 00:13:18.586 { 00:13:18.586 "name": "BaseBdev3", 00:13:18.586 "uuid": "13236382-5e42-4b39-b4a0-fdde89f6e3da", 00:13:18.586 "is_configured": true, 00:13:18.586 "data_offset": 0, 00:13:18.586 "data_size": 65536 00:13:18.586 } 00:13:18.586 ] 00:13:18.586 }' 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.586 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.848 04:59:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.848 [2024-11-21 04:59:35.546347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.848 04:59:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.848 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.109 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.109 "name": "Existed_Raid", 00:13:19.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.109 "strip_size_kb": 64, 00:13:19.109 "state": "configuring", 00:13:19.109 "raid_level": "raid5f", 00:13:19.109 "superblock": false, 00:13:19.109 "num_base_bdevs": 3, 00:13:19.109 "num_base_bdevs_discovered": 1, 00:13:19.109 "num_base_bdevs_operational": 3, 00:13:19.109 "base_bdevs_list": [ 00:13:19.109 { 00:13:19.109 "name": "BaseBdev1", 00:13:19.109 "uuid": "2744f270-2786-4799-91fd-9396ea84dd5a", 00:13:19.109 "is_configured": true, 00:13:19.109 "data_offset": 0, 00:13:19.109 "data_size": 65536 00:13:19.109 }, 00:13:19.109 { 00:13:19.109 "name": null, 00:13:19.109 "uuid": "5a1004cd-695f-406e-9a44-aaee2010db1f", 00:13:19.109 "is_configured": false, 00:13:19.109 "data_offset": 0, 00:13:19.109 "data_size": 65536 00:13:19.109 }, 00:13:19.109 { 00:13:19.109 "name": null, 
00:13:19.109 "uuid": "13236382-5e42-4b39-b4a0-fdde89f6e3da", 00:13:19.109 "is_configured": false, 00:13:19.109 "data_offset": 0, 00:13:19.109 "data_size": 65536 00:13:19.109 } 00:13:19.109 ] 00:13:19.109 }' 00:13:19.109 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.109 04:59:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.370 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:19.370 04:59:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.370 [2024-11-21 04:59:36.033647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.370 04:59:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.370 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.370 "name": "Existed_Raid", 00:13:19.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.370 "strip_size_kb": 64, 00:13:19.370 "state": "configuring", 00:13:19.370 "raid_level": "raid5f", 00:13:19.370 "superblock": false, 00:13:19.370 "num_base_bdevs": 3, 00:13:19.370 "num_base_bdevs_discovered": 2, 00:13:19.370 "num_base_bdevs_operational": 3, 00:13:19.370 "base_bdevs_list": [ 00:13:19.370 { 
00:13:19.370 "name": "BaseBdev1", 00:13:19.370 "uuid": "2744f270-2786-4799-91fd-9396ea84dd5a", 00:13:19.370 "is_configured": true, 00:13:19.370 "data_offset": 0, 00:13:19.370 "data_size": 65536 00:13:19.370 }, 00:13:19.370 { 00:13:19.370 "name": null, 00:13:19.370 "uuid": "5a1004cd-695f-406e-9a44-aaee2010db1f", 00:13:19.370 "is_configured": false, 00:13:19.370 "data_offset": 0, 00:13:19.370 "data_size": 65536 00:13:19.370 }, 00:13:19.370 { 00:13:19.370 "name": "BaseBdev3", 00:13:19.370 "uuid": "13236382-5e42-4b39-b4a0-fdde89f6e3da", 00:13:19.370 "is_configured": true, 00:13:19.370 "data_offset": 0, 00:13:19.370 "data_size": 65536 00:13:19.370 } 00:13:19.371 ] 00:13:19.371 }' 00:13:19.371 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.371 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 [2024-11-21 04:59:36.532915] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.941 "name": "Existed_Raid", 00:13:19.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.941 "strip_size_kb": 64, 00:13:19.941 "state": "configuring", 00:13:19.941 "raid_level": "raid5f", 00:13:19.941 "superblock": false, 00:13:19.941 "num_base_bdevs": 3, 00:13:19.941 "num_base_bdevs_discovered": 1, 00:13:19.941 "num_base_bdevs_operational": 3, 00:13:19.941 "base_bdevs_list": [ 00:13:19.941 { 00:13:19.941 "name": null, 00:13:19.941 "uuid": "2744f270-2786-4799-91fd-9396ea84dd5a", 00:13:19.941 "is_configured": false, 00:13:19.941 "data_offset": 0, 00:13:19.941 "data_size": 65536 00:13:19.941 }, 00:13:19.941 { 00:13:19.941 "name": null, 00:13:19.941 "uuid": "5a1004cd-695f-406e-9a44-aaee2010db1f", 00:13:19.941 "is_configured": false, 00:13:19.941 "data_offset": 0, 00:13:19.941 "data_size": 65536 00:13:19.941 }, 00:13:19.941 { 00:13:19.941 "name": "BaseBdev3", 00:13:19.941 "uuid": "13236382-5e42-4b39-b4a0-fdde89f6e3da", 00:13:19.941 "is_configured": true, 00:13:19.941 "data_offset": 0, 00:13:19.941 "data_size": 65536 00:13:19.941 } 00:13:19.941 ] 00:13:19.941 }' 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.941 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.512 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:20.512 04:59:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.512 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.512 04:59:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.512 [2024-11-21 04:59:37.028329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.512 04:59:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.512 "name": "Existed_Raid", 00:13:20.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.512 "strip_size_kb": 64, 00:13:20.512 "state": "configuring", 00:13:20.512 "raid_level": "raid5f", 00:13:20.512 "superblock": false, 00:13:20.512 "num_base_bdevs": 3, 00:13:20.512 "num_base_bdevs_discovered": 2, 00:13:20.512 "num_base_bdevs_operational": 3, 00:13:20.512 "base_bdevs_list": [ 00:13:20.512 { 00:13:20.512 "name": null, 00:13:20.512 "uuid": "2744f270-2786-4799-91fd-9396ea84dd5a", 00:13:20.512 "is_configured": false, 00:13:20.512 "data_offset": 0, 00:13:20.512 "data_size": 65536 00:13:20.512 }, 00:13:20.512 { 00:13:20.512 "name": "BaseBdev2", 00:13:20.512 "uuid": "5a1004cd-695f-406e-9a44-aaee2010db1f", 00:13:20.512 "is_configured": true, 00:13:20.512 "data_offset": 0, 00:13:20.512 "data_size": 65536 00:13:20.512 }, 00:13:20.512 { 00:13:20.512 "name": "BaseBdev3", 00:13:20.512 "uuid": "13236382-5e42-4b39-b4a0-fdde89f6e3da", 00:13:20.512 "is_configured": true, 00:13:20.512 "data_offset": 0, 00:13:20.512 "data_size": 65536 00:13:20.512 } 00:13:20.512 ] 00:13:20.512 }' 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.512 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.772 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:20.772 04:59:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.772 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.772 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.772 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.772 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:20.772 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.772 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:20.772 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.772 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2744f270-2786-4799-91fd-9396ea84dd5a 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.032 [2024-11-21 04:59:37.556170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:21.032 [2024-11-21 04:59:37.556311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:21.032 [2024-11-21 04:59:37.556331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:21.032 [2024-11-21 04:59:37.556669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006080 00:13:21.032 [2024-11-21 04:59:37.557174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:21.032 [2024-11-21 04:59:37.557189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:21.032 [2024-11-21 04:59:37.557433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.032 NewBaseBdev 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.032 04:59:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.032 [ 00:13:21.032 { 00:13:21.032 "name": "NewBaseBdev", 00:13:21.032 "aliases": [ 00:13:21.032 "2744f270-2786-4799-91fd-9396ea84dd5a" 00:13:21.032 ], 00:13:21.032 "product_name": "Malloc disk", 00:13:21.032 "block_size": 512, 00:13:21.032 "num_blocks": 65536, 00:13:21.032 "uuid": "2744f270-2786-4799-91fd-9396ea84dd5a", 00:13:21.032 "assigned_rate_limits": { 00:13:21.032 "rw_ios_per_sec": 0, 00:13:21.032 "rw_mbytes_per_sec": 0, 00:13:21.032 "r_mbytes_per_sec": 0, 00:13:21.032 "w_mbytes_per_sec": 0 00:13:21.032 }, 00:13:21.032 "claimed": true, 00:13:21.032 "claim_type": "exclusive_write", 00:13:21.032 "zoned": false, 00:13:21.032 "supported_io_types": { 00:13:21.032 "read": true, 00:13:21.032 "write": true, 00:13:21.032 "unmap": true, 00:13:21.032 "flush": true, 00:13:21.032 "reset": true, 00:13:21.032 "nvme_admin": false, 00:13:21.032 "nvme_io": false, 00:13:21.032 "nvme_io_md": false, 00:13:21.032 "write_zeroes": true, 00:13:21.032 "zcopy": true, 00:13:21.032 "get_zone_info": false, 00:13:21.032 "zone_management": false, 00:13:21.032 "zone_append": false, 00:13:21.032 "compare": false, 00:13:21.032 "compare_and_write": false, 00:13:21.032 "abort": true, 00:13:21.032 "seek_hole": false, 00:13:21.032 "seek_data": false, 00:13:21.032 "copy": true, 00:13:21.032 "nvme_iov_md": false 00:13:21.032 }, 00:13:21.032 "memory_domains": [ 00:13:21.032 { 00:13:21.032 "dma_device_id": "system", 00:13:21.032 "dma_device_type": 1 00:13:21.032 }, 00:13:21.032 { 00:13:21.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.032 "dma_device_type": 2 00:13:21.032 } 00:13:21.032 ], 00:13:21.032 "driver_specific": {} 00:13:21.032 } 00:13:21.032 ] 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.032 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:21.033 04:59:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.033 "name": "Existed_Raid", 00:13:21.033 "uuid": "1c6ffe83-f142-43d6-862c-ae84f56178a5", 00:13:21.033 "strip_size_kb": 64, 00:13:21.033 "state": "online", 
00:13:21.033 "raid_level": "raid5f", 00:13:21.033 "superblock": false, 00:13:21.033 "num_base_bdevs": 3, 00:13:21.033 "num_base_bdevs_discovered": 3, 00:13:21.033 "num_base_bdevs_operational": 3, 00:13:21.033 "base_bdevs_list": [ 00:13:21.033 { 00:13:21.033 "name": "NewBaseBdev", 00:13:21.033 "uuid": "2744f270-2786-4799-91fd-9396ea84dd5a", 00:13:21.033 "is_configured": true, 00:13:21.033 "data_offset": 0, 00:13:21.033 "data_size": 65536 00:13:21.033 }, 00:13:21.033 { 00:13:21.033 "name": "BaseBdev2", 00:13:21.033 "uuid": "5a1004cd-695f-406e-9a44-aaee2010db1f", 00:13:21.033 "is_configured": true, 00:13:21.033 "data_offset": 0, 00:13:21.033 "data_size": 65536 00:13:21.033 }, 00:13:21.033 { 00:13:21.033 "name": "BaseBdev3", 00:13:21.033 "uuid": "13236382-5e42-4b39-b4a0-fdde89f6e3da", 00:13:21.033 "is_configured": true, 00:13:21.033 "data_offset": 0, 00:13:21.033 "data_size": 65536 00:13:21.033 } 00:13:21.033 ] 00:13:21.033 }' 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.033 04:59:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:21.603 04:59:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:21.603 [2024-11-21 04:59:38.063572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:21.603 "name": "Existed_Raid", 00:13:21.603 "aliases": [ 00:13:21.603 "1c6ffe83-f142-43d6-862c-ae84f56178a5" 00:13:21.603 ], 00:13:21.603 "product_name": "Raid Volume", 00:13:21.603 "block_size": 512, 00:13:21.603 "num_blocks": 131072, 00:13:21.603 "uuid": "1c6ffe83-f142-43d6-862c-ae84f56178a5", 00:13:21.603 "assigned_rate_limits": { 00:13:21.603 "rw_ios_per_sec": 0, 00:13:21.603 "rw_mbytes_per_sec": 0, 00:13:21.603 "r_mbytes_per_sec": 0, 00:13:21.603 "w_mbytes_per_sec": 0 00:13:21.603 }, 00:13:21.603 "claimed": false, 00:13:21.603 "zoned": false, 00:13:21.603 "supported_io_types": { 00:13:21.603 "read": true, 00:13:21.603 "write": true, 00:13:21.603 "unmap": false, 00:13:21.603 "flush": false, 00:13:21.603 "reset": true, 00:13:21.603 "nvme_admin": false, 00:13:21.603 "nvme_io": false, 00:13:21.603 "nvme_io_md": false, 00:13:21.603 "write_zeroes": true, 00:13:21.603 "zcopy": false, 00:13:21.603 "get_zone_info": false, 00:13:21.603 "zone_management": false, 00:13:21.603 "zone_append": false, 00:13:21.603 "compare": false, 00:13:21.603 "compare_and_write": false, 00:13:21.603 "abort": false, 00:13:21.603 "seek_hole": false, 00:13:21.603 "seek_data": false, 00:13:21.603 "copy": false, 00:13:21.603 "nvme_iov_md": false 00:13:21.603 }, 00:13:21.603 "driver_specific": { 00:13:21.603 "raid": { 00:13:21.603 "uuid": 
"1c6ffe83-f142-43d6-862c-ae84f56178a5", 00:13:21.603 "strip_size_kb": 64, 00:13:21.603 "state": "online", 00:13:21.603 "raid_level": "raid5f", 00:13:21.603 "superblock": false, 00:13:21.603 "num_base_bdevs": 3, 00:13:21.603 "num_base_bdevs_discovered": 3, 00:13:21.603 "num_base_bdevs_operational": 3, 00:13:21.603 "base_bdevs_list": [ 00:13:21.603 { 00:13:21.603 "name": "NewBaseBdev", 00:13:21.603 "uuid": "2744f270-2786-4799-91fd-9396ea84dd5a", 00:13:21.603 "is_configured": true, 00:13:21.603 "data_offset": 0, 00:13:21.603 "data_size": 65536 00:13:21.603 }, 00:13:21.603 { 00:13:21.603 "name": "BaseBdev2", 00:13:21.603 "uuid": "5a1004cd-695f-406e-9a44-aaee2010db1f", 00:13:21.603 "is_configured": true, 00:13:21.603 "data_offset": 0, 00:13:21.603 "data_size": 65536 00:13:21.603 }, 00:13:21.603 { 00:13:21.603 "name": "BaseBdev3", 00:13:21.603 "uuid": "13236382-5e42-4b39-b4a0-fdde89f6e3da", 00:13:21.603 "is_configured": true, 00:13:21.603 "data_offset": 0, 00:13:21.603 "data_size": 65536 00:13:21.603 } 00:13:21.603 ] 00:13:21.603 } 00:13:21.603 } 00:13:21.603 }' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:21.603 BaseBdev2 00:13:21.603 BaseBdev3' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.603 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.863 [2024-11-21 04:59:38.342933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.863 [2024-11-21 04:59:38.343011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:21.863 [2024-11-21 04:59:38.343141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.863 [2024-11-21 04:59:38.343438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.863 [2024-11-21 04:59:38.343457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90581 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 90581 ']' 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 90581 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90581 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.863 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.864 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90581' 00:13:21.864 killing process with pid 90581 00:13:21.864 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 90581 00:13:21.864 [2024-11-21 04:59:38.390903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.864 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 90581 00:13:21.864 [2024-11-21 04:59:38.450947] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.124 04:59:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:22.124 ************************************ 00:13:22.124 END TEST raid5f_state_function_test 00:13:22.124 ************************************ 00:13:22.124 00:13:22.124 real 0m9.157s 00:13:22.124 user 0m15.443s 00:13:22.124 sys 0m1.914s 00:13:22.124 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.124 04:59:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.124 04:59:38 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:22.124 04:59:38 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:22.124 04:59:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.124 04:59:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.124 ************************************ 00:13:22.124 START TEST raid5f_state_function_test_sb 00:13:22.124 ************************************ 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:22.384 04:59:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91187 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91187' 00:13:22.384 Process raid pid: 91187 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 91187 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 91187 ']' 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.384 04:59:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.384 [2024-11-21 04:59:38.947118] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:13:22.384 [2024-11-21 04:59:38.947313] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.652 [2024-11-21 04:59:39.117454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.652 [2024-11-21 04:59:39.156202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.652 [2024-11-21 04:59:39.231979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.652 [2024-11-21 04:59:39.232162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.228 [2024-11-21 04:59:39.768163] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.228 [2024-11-21 04:59:39.768295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.228 [2024-11-21 04:59:39.768313] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.228 [2024-11-21 04:59:39.768328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.228 [2024-11-21 04:59:39.768340] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:23.228 [2024-11-21 04:59:39.768372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.228 04:59:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.228 "name": "Existed_Raid", 00:13:23.228 "uuid": "71217c59-a4b2-441f-bb6e-0bf9e59644cd", 00:13:23.228 "strip_size_kb": 64, 00:13:23.228 "state": "configuring", 00:13:23.228 "raid_level": "raid5f", 00:13:23.228 "superblock": true, 00:13:23.228 "num_base_bdevs": 3, 00:13:23.228 "num_base_bdevs_discovered": 0, 00:13:23.228 "num_base_bdevs_operational": 3, 00:13:23.228 "base_bdevs_list": [ 00:13:23.228 { 00:13:23.228 "name": "BaseBdev1", 00:13:23.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.228 "is_configured": false, 00:13:23.228 "data_offset": 0, 00:13:23.228 "data_size": 0 00:13:23.228 }, 00:13:23.228 { 00:13:23.228 "name": "BaseBdev2", 00:13:23.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.228 "is_configured": false, 00:13:23.228 "data_offset": 0, 00:13:23.228 "data_size": 0 00:13:23.228 }, 00:13:23.228 { 00:13:23.228 "name": "BaseBdev3", 00:13:23.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.228 "is_configured": false, 00:13:23.228 "data_offset": 0, 00:13:23.228 "data_size": 0 00:13:23.228 } 00:13:23.228 ] 00:13:23.228 }' 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.228 04:59:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.489 [2024-11-21 04:59:40.179427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.489 
[2024-11-21 04:59:40.179522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.489 [2024-11-21 04:59:40.187436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.489 [2024-11-21 04:59:40.187524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.489 [2024-11-21 04:59:40.187575] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.489 [2024-11-21 04:59:40.187604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.489 [2024-11-21 04:59:40.187637] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:23.489 [2024-11-21 04:59:40.187687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.489 [2024-11-21 04:59:40.214486] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.489 BaseBdev1 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.489 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.749 [ 00:13:23.749 { 00:13:23.749 "name": "BaseBdev1", 00:13:23.749 "aliases": [ 00:13:23.749 "4685cb79-aace-4863-92d7-edfef0b30394" 00:13:23.749 ], 00:13:23.749 "product_name": "Malloc disk", 00:13:23.749 "block_size": 512, 00:13:23.749 
"num_blocks": 65536, 00:13:23.749 "uuid": "4685cb79-aace-4863-92d7-edfef0b30394", 00:13:23.749 "assigned_rate_limits": { 00:13:23.749 "rw_ios_per_sec": 0, 00:13:23.749 "rw_mbytes_per_sec": 0, 00:13:23.749 "r_mbytes_per_sec": 0, 00:13:23.749 "w_mbytes_per_sec": 0 00:13:23.749 }, 00:13:23.749 "claimed": true, 00:13:23.749 "claim_type": "exclusive_write", 00:13:23.749 "zoned": false, 00:13:23.749 "supported_io_types": { 00:13:23.749 "read": true, 00:13:23.749 "write": true, 00:13:23.749 "unmap": true, 00:13:23.749 "flush": true, 00:13:23.749 "reset": true, 00:13:23.749 "nvme_admin": false, 00:13:23.749 "nvme_io": false, 00:13:23.749 "nvme_io_md": false, 00:13:23.749 "write_zeroes": true, 00:13:23.749 "zcopy": true, 00:13:23.749 "get_zone_info": false, 00:13:23.749 "zone_management": false, 00:13:23.749 "zone_append": false, 00:13:23.749 "compare": false, 00:13:23.749 "compare_and_write": false, 00:13:23.749 "abort": true, 00:13:23.749 "seek_hole": false, 00:13:23.749 "seek_data": false, 00:13:23.749 "copy": true, 00:13:23.749 "nvme_iov_md": false 00:13:23.749 }, 00:13:23.749 "memory_domains": [ 00:13:23.749 { 00:13:23.749 "dma_device_id": "system", 00:13:23.749 "dma_device_type": 1 00:13:23.749 }, 00:13:23.749 { 00:13:23.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.749 "dma_device_type": 2 00:13:23.749 } 00:13:23.749 ], 00:13:23.749 "driver_specific": {} 00:13:23.749 } 00:13:23.749 ] 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.749 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.749 "name": "Existed_Raid", 00:13:23.749 "uuid": "9cbb8c75-a5ff-4203-8775-d9a7b26b32b0", 00:13:23.749 "strip_size_kb": 64, 00:13:23.749 "state": "configuring", 00:13:23.749 "raid_level": "raid5f", 00:13:23.749 "superblock": true, 00:13:23.749 "num_base_bdevs": 3, 00:13:23.749 "num_base_bdevs_discovered": 1, 00:13:23.749 "num_base_bdevs_operational": 3, 00:13:23.750 "base_bdevs_list": [ 00:13:23.750 { 00:13:23.750 
"name": "BaseBdev1", 00:13:23.750 "uuid": "4685cb79-aace-4863-92d7-edfef0b30394", 00:13:23.750 "is_configured": true, 00:13:23.750 "data_offset": 2048, 00:13:23.750 "data_size": 63488 00:13:23.750 }, 00:13:23.750 { 00:13:23.750 "name": "BaseBdev2", 00:13:23.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.750 "is_configured": false, 00:13:23.750 "data_offset": 0, 00:13:23.750 "data_size": 0 00:13:23.750 }, 00:13:23.750 { 00:13:23.750 "name": "BaseBdev3", 00:13:23.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.750 "is_configured": false, 00:13:23.750 "data_offset": 0, 00:13:23.750 "data_size": 0 00:13:23.750 } 00:13:23.750 ] 00:13:23.750 }' 00:13:23.750 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.750 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.010 [2024-11-21 04:59:40.697686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:24.010 [2024-11-21 04:59:40.697745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:24.010 [2024-11-21 04:59:40.709684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.010 [2024-11-21 04:59:40.711909] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:24.010 [2024-11-21 04:59:40.712011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:24.010 [2024-11-21 04:59:40.712043] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:24.010 [2024-11-21 04:59:40.712058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.010 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.270 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.270 "name": "Existed_Raid", 00:13:24.270 "uuid": "d30c73db-c7f1-476d-9535-ba2afd6f1509", 00:13:24.270 "strip_size_kb": 64, 00:13:24.270 "state": "configuring", 00:13:24.270 "raid_level": "raid5f", 00:13:24.270 "superblock": true, 00:13:24.270 "num_base_bdevs": 3, 00:13:24.270 "num_base_bdevs_discovered": 1, 00:13:24.270 "num_base_bdevs_operational": 3, 00:13:24.270 "base_bdevs_list": [ 00:13:24.270 { 00:13:24.270 "name": "BaseBdev1", 00:13:24.270 "uuid": "4685cb79-aace-4863-92d7-edfef0b30394", 00:13:24.270 "is_configured": true, 00:13:24.270 "data_offset": 2048, 00:13:24.270 "data_size": 63488 00:13:24.270 }, 00:13:24.270 { 00:13:24.270 "name": "BaseBdev2", 00:13:24.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.270 "is_configured": false, 00:13:24.270 "data_offset": 0, 00:13:24.270 "data_size": 0 00:13:24.270 }, 00:13:24.270 { 00:13:24.270 "name": "BaseBdev3", 00:13:24.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.270 "is_configured": false, 00:13:24.270 "data_offset": 0, 00:13:24.270 "data_size": 
0 00:13:24.270 } 00:13:24.270 ] 00:13:24.270 }' 00:13:24.270 04:59:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.270 04:59:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.530 [2024-11-21 04:59:41.117946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.530 BaseBdev2 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.530 [ 00:13:24.530 { 00:13:24.530 "name": "BaseBdev2", 00:13:24.530 "aliases": [ 00:13:24.530 "51f07778-d0fe-4d38-8b5c-c6575fa0c6bb" 00:13:24.530 ], 00:13:24.530 "product_name": "Malloc disk", 00:13:24.530 "block_size": 512, 00:13:24.530 "num_blocks": 65536, 00:13:24.530 "uuid": "51f07778-d0fe-4d38-8b5c-c6575fa0c6bb", 00:13:24.530 "assigned_rate_limits": { 00:13:24.530 "rw_ios_per_sec": 0, 00:13:24.530 "rw_mbytes_per_sec": 0, 00:13:24.530 "r_mbytes_per_sec": 0, 00:13:24.530 "w_mbytes_per_sec": 0 00:13:24.530 }, 00:13:24.530 "claimed": true, 00:13:24.530 "claim_type": "exclusive_write", 00:13:24.530 "zoned": false, 00:13:24.530 "supported_io_types": { 00:13:24.530 "read": true, 00:13:24.530 "write": true, 00:13:24.530 "unmap": true, 00:13:24.530 "flush": true, 00:13:24.530 "reset": true, 00:13:24.530 "nvme_admin": false, 00:13:24.530 "nvme_io": false, 00:13:24.530 "nvme_io_md": false, 00:13:24.530 "write_zeroes": true, 00:13:24.530 "zcopy": true, 00:13:24.530 "get_zone_info": false, 00:13:24.530 "zone_management": false, 00:13:24.530 "zone_append": false, 00:13:24.530 "compare": false, 00:13:24.530 "compare_and_write": false, 00:13:24.530 "abort": true, 00:13:24.530 "seek_hole": false, 00:13:24.530 "seek_data": false, 00:13:24.530 "copy": true, 00:13:24.530 "nvme_iov_md": false 00:13:24.530 }, 00:13:24.530 "memory_domains": [ 00:13:24.530 { 00:13:24.530 "dma_device_id": "system", 00:13:24.530 "dma_device_type": 1 00:13:24.530 }, 00:13:24.530 { 00:13:24.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.530 "dma_device_type": 2 00:13:24.530 } 
00:13:24.530 ], 00:13:24.530 "driver_specific": {} 00:13:24.530 } 00:13:24.530 ] 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:24.530 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.531 04:59:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.531 "name": "Existed_Raid", 00:13:24.531 "uuid": "d30c73db-c7f1-476d-9535-ba2afd6f1509", 00:13:24.531 "strip_size_kb": 64, 00:13:24.531 "state": "configuring", 00:13:24.531 "raid_level": "raid5f", 00:13:24.531 "superblock": true, 00:13:24.531 "num_base_bdevs": 3, 00:13:24.531 "num_base_bdevs_discovered": 2, 00:13:24.531 "num_base_bdevs_operational": 3, 00:13:24.531 "base_bdevs_list": [ 00:13:24.531 { 00:13:24.531 "name": "BaseBdev1", 00:13:24.531 "uuid": "4685cb79-aace-4863-92d7-edfef0b30394", 00:13:24.531 "is_configured": true, 00:13:24.531 "data_offset": 2048, 00:13:24.531 "data_size": 63488 00:13:24.531 }, 00:13:24.531 { 00:13:24.531 "name": "BaseBdev2", 00:13:24.531 "uuid": "51f07778-d0fe-4d38-8b5c-c6575fa0c6bb", 00:13:24.531 "is_configured": true, 00:13:24.531 "data_offset": 2048, 00:13:24.531 "data_size": 63488 00:13:24.531 }, 00:13:24.531 { 00:13:24.531 "name": "BaseBdev3", 00:13:24.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.531 "is_configured": false, 00:13:24.531 "data_offset": 0, 00:13:24.531 "data_size": 0 00:13:24.531 } 00:13:24.531 ] 00:13:24.531 }' 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.531 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.101 [2024-11-21 04:59:41.601019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.101 [2024-11-21 04:59:41.601772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:25.101 BaseBdev3 00:13:25.101 [2024-11-21 04:59:41.602034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.101 [2024-11-21 04:59:41.603287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.101 [2024-11-21 04:59:41.605261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:25.101 [2024-11-21 04:59:41.605318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 
00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.101 [2024-11-21 04:59:41.605876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.101 [ 00:13:25.101 { 00:13:25.101 "name": "BaseBdev3", 00:13:25.101 "aliases": [ 00:13:25.101 "36835d79-2b9b-4e3a-82a7-b7659cd7ae44" 00:13:25.101 ], 00:13:25.101 "product_name": "Malloc disk", 00:13:25.101 "block_size": 512, 00:13:25.101 "num_blocks": 65536, 00:13:25.101 "uuid": "36835d79-2b9b-4e3a-82a7-b7659cd7ae44", 00:13:25.101 "assigned_rate_limits": { 00:13:25.101 "rw_ios_per_sec": 0, 00:13:25.101 "rw_mbytes_per_sec": 0, 00:13:25.101 "r_mbytes_per_sec": 0, 00:13:25.101 "w_mbytes_per_sec": 0 00:13:25.101 }, 00:13:25.101 "claimed": true, 00:13:25.101 "claim_type": "exclusive_write", 00:13:25.101 "zoned": false, 00:13:25.101 "supported_io_types": { 00:13:25.101 "read": true, 00:13:25.101 "write": true, 00:13:25.101 "unmap": true, 00:13:25.101 "flush": true, 00:13:25.101 "reset": true, 00:13:25.101 "nvme_admin": false, 00:13:25.101 "nvme_io": false, 00:13:25.101 "nvme_io_md": false, 00:13:25.101 "write_zeroes": true, 00:13:25.101 "zcopy": true, 00:13:25.101 "get_zone_info": false, 00:13:25.101 "zone_management": false, 00:13:25.101 "zone_append": false, 00:13:25.101 "compare": false, 00:13:25.101 "compare_and_write": false, 00:13:25.101 "abort": true, 00:13:25.101 "seek_hole": false, 00:13:25.101 "seek_data": false, 00:13:25.101 "copy": true, 00:13:25.101 "nvme_iov_md": 
false 00:13:25.101 }, 00:13:25.101 "memory_domains": [ 00:13:25.101 { 00:13:25.101 "dma_device_id": "system", 00:13:25.101 "dma_device_type": 1 00:13:25.101 }, 00:13:25.101 { 00:13:25.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.101 "dma_device_type": 2 00:13:25.101 } 00:13:25.101 ], 00:13:25.101 "driver_specific": {} 00:13:25.101 } 00:13:25.101 ] 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.101 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.102 "name": "Existed_Raid", 00:13:25.102 "uuid": "d30c73db-c7f1-476d-9535-ba2afd6f1509", 00:13:25.102 "strip_size_kb": 64, 00:13:25.102 "state": "online", 00:13:25.102 "raid_level": "raid5f", 00:13:25.102 "superblock": true, 00:13:25.102 "num_base_bdevs": 3, 00:13:25.102 "num_base_bdevs_discovered": 3, 00:13:25.102 "num_base_bdevs_operational": 3, 00:13:25.102 "base_bdevs_list": [ 00:13:25.102 { 00:13:25.102 "name": "BaseBdev1", 00:13:25.102 "uuid": "4685cb79-aace-4863-92d7-edfef0b30394", 00:13:25.102 "is_configured": true, 00:13:25.102 "data_offset": 2048, 00:13:25.102 "data_size": 63488 00:13:25.102 }, 00:13:25.102 { 00:13:25.102 "name": "BaseBdev2", 00:13:25.102 "uuid": "51f07778-d0fe-4d38-8b5c-c6575fa0c6bb", 00:13:25.102 "is_configured": true, 00:13:25.102 "data_offset": 2048, 00:13:25.102 "data_size": 63488 00:13:25.102 }, 00:13:25.102 { 00:13:25.102 "name": "BaseBdev3", 00:13:25.102 "uuid": "36835d79-2b9b-4e3a-82a7-b7659cd7ae44", 00:13:25.102 "is_configured": true, 00:13:25.102 "data_offset": 2048, 00:13:25.102 "data_size": 63488 00:13:25.102 } 00:13:25.102 ] 00:13:25.102 }' 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.102 04:59:41 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:25.362 [2024-11-21 04:59:42.081252] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.362 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:25.622 "name": "Existed_Raid", 00:13:25.622 "aliases": [ 00:13:25.622 "d30c73db-c7f1-476d-9535-ba2afd6f1509" 00:13:25.622 ], 00:13:25.622 "product_name": "Raid Volume", 00:13:25.622 "block_size": 512, 00:13:25.622 "num_blocks": 126976, 00:13:25.622 "uuid": "d30c73db-c7f1-476d-9535-ba2afd6f1509", 00:13:25.622 "assigned_rate_limits": { 00:13:25.622 "rw_ios_per_sec": 0, 00:13:25.622 "rw_mbytes_per_sec": 0, 00:13:25.622 "r_mbytes_per_sec": 
0, 00:13:25.622 "w_mbytes_per_sec": 0 00:13:25.622 }, 00:13:25.622 "claimed": false, 00:13:25.622 "zoned": false, 00:13:25.622 "supported_io_types": { 00:13:25.622 "read": true, 00:13:25.622 "write": true, 00:13:25.622 "unmap": false, 00:13:25.622 "flush": false, 00:13:25.622 "reset": true, 00:13:25.622 "nvme_admin": false, 00:13:25.622 "nvme_io": false, 00:13:25.622 "nvme_io_md": false, 00:13:25.622 "write_zeroes": true, 00:13:25.622 "zcopy": false, 00:13:25.622 "get_zone_info": false, 00:13:25.622 "zone_management": false, 00:13:25.622 "zone_append": false, 00:13:25.622 "compare": false, 00:13:25.622 "compare_and_write": false, 00:13:25.622 "abort": false, 00:13:25.622 "seek_hole": false, 00:13:25.622 "seek_data": false, 00:13:25.622 "copy": false, 00:13:25.622 "nvme_iov_md": false 00:13:25.622 }, 00:13:25.622 "driver_specific": { 00:13:25.622 "raid": { 00:13:25.622 "uuid": "d30c73db-c7f1-476d-9535-ba2afd6f1509", 00:13:25.622 "strip_size_kb": 64, 00:13:25.622 "state": "online", 00:13:25.622 "raid_level": "raid5f", 00:13:25.622 "superblock": true, 00:13:25.622 "num_base_bdevs": 3, 00:13:25.622 "num_base_bdevs_discovered": 3, 00:13:25.622 "num_base_bdevs_operational": 3, 00:13:25.622 "base_bdevs_list": [ 00:13:25.622 { 00:13:25.622 "name": "BaseBdev1", 00:13:25.622 "uuid": "4685cb79-aace-4863-92d7-edfef0b30394", 00:13:25.622 "is_configured": true, 00:13:25.622 "data_offset": 2048, 00:13:25.622 "data_size": 63488 00:13:25.622 }, 00:13:25.622 { 00:13:25.622 "name": "BaseBdev2", 00:13:25.622 "uuid": "51f07778-d0fe-4d38-8b5c-c6575fa0c6bb", 00:13:25.622 "is_configured": true, 00:13:25.622 "data_offset": 2048, 00:13:25.622 "data_size": 63488 00:13:25.622 }, 00:13:25.622 { 00:13:25.622 "name": "BaseBdev3", 00:13:25.622 "uuid": "36835d79-2b9b-4e3a-82a7-b7659cd7ae44", 00:13:25.622 "is_configured": true, 00:13:25.622 "data_offset": 2048, 00:13:25.622 "data_size": 63488 00:13:25.622 } 00:13:25.622 ] 00:13:25.622 } 00:13:25.622 } 00:13:25.622 }' 00:13:25.622 04:59:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:25.622 BaseBdev2 00:13:25.622 BaseBdev3' 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.622 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.622 [2024-11-21 04:59:42.340653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:13:25.882 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.882 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:25.882 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:25.882 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:25.882 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:25.882 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:25.882 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:25.882 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.882 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.882 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.883 "name": "Existed_Raid", 00:13:25.883 "uuid": "d30c73db-c7f1-476d-9535-ba2afd6f1509", 00:13:25.883 "strip_size_kb": 64, 00:13:25.883 "state": "online", 00:13:25.883 "raid_level": "raid5f", 00:13:25.883 "superblock": true, 00:13:25.883 "num_base_bdevs": 3, 00:13:25.883 "num_base_bdevs_discovered": 2, 00:13:25.883 "num_base_bdevs_operational": 2, 00:13:25.883 "base_bdevs_list": [ 00:13:25.883 { 00:13:25.883 "name": null, 00:13:25.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.883 "is_configured": false, 00:13:25.883 "data_offset": 0, 00:13:25.883 "data_size": 63488 00:13:25.883 }, 00:13:25.883 { 00:13:25.883 "name": "BaseBdev2", 00:13:25.883 "uuid": "51f07778-d0fe-4d38-8b5c-c6575fa0c6bb", 00:13:25.883 "is_configured": true, 00:13:25.883 "data_offset": 2048, 00:13:25.883 "data_size": 63488 00:13:25.883 }, 00:13:25.883 { 00:13:25.883 "name": "BaseBdev3", 00:13:25.883 "uuid": "36835d79-2b9b-4e3a-82a7-b7659cd7ae44", 00:13:25.883 "is_configured": true, 00:13:25.883 "data_offset": 2048, 00:13:25.883 "data_size": 63488 00:13:25.883 } 00:13:25.883 ] 00:13:25.883 }' 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.883 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.143 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:13:26.143 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.143 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.143 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.143 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.143 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:26.143 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.404 [2024-11-21 04:59:42.888849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.404 [2024-11-21 04:59:42.889020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.404 [2024-11-21 04:59:42.909664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.404 04:59:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.404 [2024-11-21 04:59:42.953570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:26.404 [2024-11-21 04:59:42.953625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.404 
04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.404 04:59:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.404 BaseBdev2 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.404 [ 00:13:26.404 { 00:13:26.404 "name": "BaseBdev2", 00:13:26.404 "aliases": [ 00:13:26.404 "d45a13b3-4d84-4f8f-b9bc-21382624d906" 00:13:26.404 ], 00:13:26.404 "product_name": "Malloc disk", 00:13:26.404 "block_size": 512, 00:13:26.404 "num_blocks": 65536, 00:13:26.404 "uuid": "d45a13b3-4d84-4f8f-b9bc-21382624d906", 00:13:26.404 "assigned_rate_limits": { 00:13:26.404 "rw_ios_per_sec": 0, 00:13:26.404 "rw_mbytes_per_sec": 0, 00:13:26.404 "r_mbytes_per_sec": 0, 00:13:26.404 "w_mbytes_per_sec": 0 00:13:26.404 }, 00:13:26.404 "claimed": false, 00:13:26.404 "zoned": false, 00:13:26.404 "supported_io_types": { 00:13:26.404 "read": true, 00:13:26.404 "write": true, 00:13:26.404 "unmap": true, 00:13:26.404 "flush": true, 00:13:26.404 "reset": true, 00:13:26.404 "nvme_admin": false, 00:13:26.404 "nvme_io": false, 00:13:26.404 "nvme_io_md": false, 00:13:26.404 "write_zeroes": true, 00:13:26.404 "zcopy": true, 00:13:26.404 "get_zone_info": false, 00:13:26.404 "zone_management": false, 00:13:26.404 "zone_append": false, 00:13:26.404 "compare": false, 00:13:26.404 "compare_and_write": false, 
00:13:26.404 "abort": true, 00:13:26.404 "seek_hole": false, 00:13:26.404 "seek_data": false, 00:13:26.404 "copy": true, 00:13:26.404 "nvme_iov_md": false 00:13:26.404 }, 00:13:26.404 "memory_domains": [ 00:13:26.404 { 00:13:26.404 "dma_device_id": "system", 00:13:26.404 "dma_device_type": 1 00:13:26.404 }, 00:13:26.404 { 00:13:26.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.404 "dma_device_type": 2 00:13:26.404 } 00:13:26.404 ], 00:13:26.404 "driver_specific": {} 00:13:26.404 } 00:13:26.404 ] 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.404 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.405 BaseBdev3 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.405 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.405 [ 00:13:26.405 { 00:13:26.405 "name": "BaseBdev3", 00:13:26.405 "aliases": [ 00:13:26.405 "2ad6f449-601d-43ff-9adc-26bd5d994dcb" 00:13:26.405 ], 00:13:26.405 "product_name": "Malloc disk", 00:13:26.405 "block_size": 512, 00:13:26.405 "num_blocks": 65536, 00:13:26.405 "uuid": "2ad6f449-601d-43ff-9adc-26bd5d994dcb", 00:13:26.405 "assigned_rate_limits": { 00:13:26.405 "rw_ios_per_sec": 0, 00:13:26.405 "rw_mbytes_per_sec": 0, 00:13:26.405 "r_mbytes_per_sec": 0, 00:13:26.405 "w_mbytes_per_sec": 0 00:13:26.405 }, 00:13:26.405 "claimed": false, 00:13:26.405 "zoned": false, 00:13:26.405 "supported_io_types": { 00:13:26.405 "read": true, 00:13:26.405 "write": true, 00:13:26.405 "unmap": true, 00:13:26.405 "flush": true, 00:13:26.405 "reset": true, 00:13:26.405 "nvme_admin": false, 00:13:26.405 "nvme_io": false, 00:13:26.405 "nvme_io_md": false, 00:13:26.405 "write_zeroes": true, 00:13:26.405 "zcopy": true, 00:13:26.405 "get_zone_info": false, 00:13:26.405 "zone_management": false, 
00:13:26.665 "zone_append": false, 00:13:26.665 "compare": false, 00:13:26.665 "compare_and_write": false, 00:13:26.665 "abort": true, 00:13:26.665 "seek_hole": false, 00:13:26.665 "seek_data": false, 00:13:26.665 "copy": true, 00:13:26.665 "nvme_iov_md": false 00:13:26.665 }, 00:13:26.665 "memory_domains": [ 00:13:26.665 { 00:13:26.665 "dma_device_id": "system", 00:13:26.665 "dma_device_type": 1 00:13:26.665 }, 00:13:26.665 { 00:13:26.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.665 "dma_device_type": 2 00:13:26.665 } 00:13:26.665 ], 00:13:26.665 "driver_specific": {} 00:13:26.665 } 00:13:26.665 ] 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.665 [2024-11-21 04:59:43.149298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.665 [2024-11-21 04:59:43.149440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.665 [2024-11-21 04:59:43.149509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.665 [2024-11-21 04:59:43.151708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.665 
04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:26.665 "name": "Existed_Raid", 00:13:26.665 "uuid": "0dff051a-6a5d-4a96-8ca9-209aee0edfe0", 00:13:26.665 "strip_size_kb": 64, 00:13:26.665 "state": "configuring", 00:13:26.665 "raid_level": "raid5f", 00:13:26.665 "superblock": true, 00:13:26.665 "num_base_bdevs": 3, 00:13:26.665 "num_base_bdevs_discovered": 2, 00:13:26.665 "num_base_bdevs_operational": 3, 00:13:26.665 "base_bdevs_list": [ 00:13:26.665 { 00:13:26.665 "name": "BaseBdev1", 00:13:26.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.665 "is_configured": false, 00:13:26.665 "data_offset": 0, 00:13:26.665 "data_size": 0 00:13:26.665 }, 00:13:26.665 { 00:13:26.665 "name": "BaseBdev2", 00:13:26.665 "uuid": "d45a13b3-4d84-4f8f-b9bc-21382624d906", 00:13:26.665 "is_configured": true, 00:13:26.665 "data_offset": 2048, 00:13:26.665 "data_size": 63488 00:13:26.665 }, 00:13:26.665 { 00:13:26.665 "name": "BaseBdev3", 00:13:26.665 "uuid": "2ad6f449-601d-43ff-9adc-26bd5d994dcb", 00:13:26.665 "is_configured": true, 00:13:26.665 "data_offset": 2048, 00:13:26.665 "data_size": 63488 00:13:26.665 } 00:13:26.665 ] 00:13:26.665 }' 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.665 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.925 [2024-11-21 04:59:43.620459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.925 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.185 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.185 "name": "Existed_Raid", 00:13:27.185 "uuid": "0dff051a-6a5d-4a96-8ca9-209aee0edfe0", 00:13:27.185 "strip_size_kb": 64, 00:13:27.185 
"state": "configuring", 00:13:27.185 "raid_level": "raid5f", 00:13:27.185 "superblock": true, 00:13:27.185 "num_base_bdevs": 3, 00:13:27.185 "num_base_bdevs_discovered": 1, 00:13:27.185 "num_base_bdevs_operational": 3, 00:13:27.185 "base_bdevs_list": [ 00:13:27.185 { 00:13:27.185 "name": "BaseBdev1", 00:13:27.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.185 "is_configured": false, 00:13:27.185 "data_offset": 0, 00:13:27.185 "data_size": 0 00:13:27.185 }, 00:13:27.185 { 00:13:27.185 "name": null, 00:13:27.185 "uuid": "d45a13b3-4d84-4f8f-b9bc-21382624d906", 00:13:27.185 "is_configured": false, 00:13:27.185 "data_offset": 0, 00:13:27.185 "data_size": 63488 00:13:27.185 }, 00:13:27.185 { 00:13:27.185 "name": "BaseBdev3", 00:13:27.185 "uuid": "2ad6f449-601d-43ff-9adc-26bd5d994dcb", 00:13:27.185 "is_configured": true, 00:13:27.185 "data_offset": 2048, 00:13:27.185 "data_size": 63488 00:13:27.185 } 00:13:27.185 ] 00:13:27.185 }' 00:13:27.185 04:59:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.185 04:59:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.445 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.446 [2024-11-21 04:59:44.096540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.446 BaseBdev1 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.446 [ 00:13:27.446 { 00:13:27.446 "name": "BaseBdev1", 00:13:27.446 "aliases": [ 00:13:27.446 "d48fafbd-662a-4026-aafb-5322b08b8f60" 00:13:27.446 ], 00:13:27.446 "product_name": "Malloc disk", 00:13:27.446 "block_size": 512, 00:13:27.446 "num_blocks": 65536, 00:13:27.446 "uuid": "d48fafbd-662a-4026-aafb-5322b08b8f60", 00:13:27.446 "assigned_rate_limits": { 00:13:27.446 "rw_ios_per_sec": 0, 00:13:27.446 "rw_mbytes_per_sec": 0, 00:13:27.446 "r_mbytes_per_sec": 0, 00:13:27.446 "w_mbytes_per_sec": 0 00:13:27.446 }, 00:13:27.446 "claimed": true, 00:13:27.446 "claim_type": "exclusive_write", 00:13:27.446 "zoned": false, 00:13:27.446 "supported_io_types": { 00:13:27.446 "read": true, 00:13:27.446 "write": true, 00:13:27.446 "unmap": true, 00:13:27.446 "flush": true, 00:13:27.446 "reset": true, 00:13:27.446 "nvme_admin": false, 00:13:27.446 "nvme_io": false, 00:13:27.446 "nvme_io_md": false, 00:13:27.446 "write_zeroes": true, 00:13:27.446 "zcopy": true, 00:13:27.446 "get_zone_info": false, 00:13:27.446 "zone_management": false, 00:13:27.446 "zone_append": false, 00:13:27.446 "compare": false, 00:13:27.446 "compare_and_write": false, 00:13:27.446 "abort": true, 00:13:27.446 "seek_hole": false, 00:13:27.446 "seek_data": false, 00:13:27.446 "copy": true, 00:13:27.446 "nvme_iov_md": false 00:13:27.446 }, 00:13:27.446 "memory_domains": [ 00:13:27.446 { 00:13:27.446 "dma_device_id": "system", 00:13:27.446 "dma_device_type": 1 00:13:27.446 }, 00:13:27.446 { 00:13:27.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.446 "dma_device_type": 2 00:13:27.446 } 00:13:27.446 ], 00:13:27.446 "driver_specific": {} 00:13:27.446 } 00:13:27.446 ] 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.446 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.706 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.706 "name": "Existed_Raid", 00:13:27.706 "uuid": "0dff051a-6a5d-4a96-8ca9-209aee0edfe0", 00:13:27.706 "strip_size_kb": 64, 00:13:27.706 
"state": "configuring", 00:13:27.706 "raid_level": "raid5f", 00:13:27.706 "superblock": true, 00:13:27.706 "num_base_bdevs": 3, 00:13:27.706 "num_base_bdevs_discovered": 2, 00:13:27.706 "num_base_bdevs_operational": 3, 00:13:27.706 "base_bdevs_list": [ 00:13:27.706 { 00:13:27.706 "name": "BaseBdev1", 00:13:27.706 "uuid": "d48fafbd-662a-4026-aafb-5322b08b8f60", 00:13:27.706 "is_configured": true, 00:13:27.706 "data_offset": 2048, 00:13:27.706 "data_size": 63488 00:13:27.706 }, 00:13:27.706 { 00:13:27.706 "name": null, 00:13:27.706 "uuid": "d45a13b3-4d84-4f8f-b9bc-21382624d906", 00:13:27.706 "is_configured": false, 00:13:27.706 "data_offset": 0, 00:13:27.706 "data_size": 63488 00:13:27.706 }, 00:13:27.706 { 00:13:27.706 "name": "BaseBdev3", 00:13:27.706 "uuid": "2ad6f449-601d-43ff-9adc-26bd5d994dcb", 00:13:27.706 "is_configured": true, 00:13:27.706 "data_offset": 2048, 00:13:27.706 "data_size": 63488 00:13:27.706 } 00:13:27.706 ] 00:13:27.706 }' 00:13:27.706 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.706 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.966 [2024-11-21 04:59:44.595815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.966 04:59:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.966 "name": "Existed_Raid", 00:13:27.966 "uuid": "0dff051a-6a5d-4a96-8ca9-209aee0edfe0", 00:13:27.966 "strip_size_kb": 64, 00:13:27.966 "state": "configuring", 00:13:27.966 "raid_level": "raid5f", 00:13:27.966 "superblock": true, 00:13:27.966 "num_base_bdevs": 3, 00:13:27.966 "num_base_bdevs_discovered": 1, 00:13:27.966 "num_base_bdevs_operational": 3, 00:13:27.966 "base_bdevs_list": [ 00:13:27.966 { 00:13:27.966 "name": "BaseBdev1", 00:13:27.966 "uuid": "d48fafbd-662a-4026-aafb-5322b08b8f60", 00:13:27.966 "is_configured": true, 00:13:27.966 "data_offset": 2048, 00:13:27.966 "data_size": 63488 00:13:27.966 }, 00:13:27.966 { 00:13:27.966 "name": null, 00:13:27.966 "uuid": "d45a13b3-4d84-4f8f-b9bc-21382624d906", 00:13:27.966 "is_configured": false, 00:13:27.966 "data_offset": 0, 00:13:27.966 "data_size": 63488 00:13:27.966 }, 00:13:27.966 { 00:13:27.966 "name": null, 00:13:27.966 "uuid": "2ad6f449-601d-43ff-9adc-26bd5d994dcb", 00:13:27.966 "is_configured": false, 00:13:27.966 "data_offset": 0, 00:13:27.966 "data_size": 63488 00:13:27.966 } 00:13:27.966 ] 00:13:27.966 }' 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.966 04:59:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.535 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.535 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:13:28.535 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.535 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.535 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.535 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:28.535 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:28.535 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.535 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.536 [2024-11-21 04:59:45.139249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.536 "name": "Existed_Raid", 00:13:28.536 "uuid": "0dff051a-6a5d-4a96-8ca9-209aee0edfe0", 00:13:28.536 "strip_size_kb": 64, 00:13:28.536 "state": "configuring", 00:13:28.536 "raid_level": "raid5f", 00:13:28.536 "superblock": true, 00:13:28.536 "num_base_bdevs": 3, 00:13:28.536 "num_base_bdevs_discovered": 2, 00:13:28.536 "num_base_bdevs_operational": 3, 00:13:28.536 "base_bdevs_list": [ 00:13:28.536 { 00:13:28.536 "name": "BaseBdev1", 00:13:28.536 "uuid": "d48fafbd-662a-4026-aafb-5322b08b8f60", 00:13:28.536 "is_configured": true, 00:13:28.536 "data_offset": 2048, 00:13:28.536 "data_size": 63488 00:13:28.536 }, 00:13:28.536 { 00:13:28.536 "name": null, 00:13:28.536 "uuid": "d45a13b3-4d84-4f8f-b9bc-21382624d906", 00:13:28.536 "is_configured": false, 00:13:28.536 "data_offset": 0, 00:13:28.536 "data_size": 63488 00:13:28.536 }, 00:13:28.536 { 00:13:28.536 "name": "BaseBdev3", 00:13:28.536 "uuid": "2ad6f449-601d-43ff-9adc-26bd5d994dcb", 00:13:28.536 "is_configured": true, 00:13:28.536 "data_offset": 
2048, 00:13:28.536 "data_size": 63488 00:13:28.536 } 00:13:28.536 ] 00:13:28.536 }' 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.536 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.105 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.106 [2024-11-21 04:59:45.658522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.106 04:59:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.106 "name": "Existed_Raid", 00:13:29.106 "uuid": "0dff051a-6a5d-4a96-8ca9-209aee0edfe0", 00:13:29.106 "strip_size_kb": 64, 00:13:29.106 "state": "configuring", 00:13:29.106 "raid_level": "raid5f", 00:13:29.106 "superblock": true, 00:13:29.106 "num_base_bdevs": 3, 00:13:29.106 "num_base_bdevs_discovered": 1, 00:13:29.106 "num_base_bdevs_operational": 3, 00:13:29.106 "base_bdevs_list": [ 00:13:29.106 { 00:13:29.106 "name": null, 00:13:29.106 "uuid": "d48fafbd-662a-4026-aafb-5322b08b8f60", 
00:13:29.106 "is_configured": false, 00:13:29.106 "data_offset": 0, 00:13:29.106 "data_size": 63488 00:13:29.106 }, 00:13:29.106 { 00:13:29.106 "name": null, 00:13:29.106 "uuid": "d45a13b3-4d84-4f8f-b9bc-21382624d906", 00:13:29.106 "is_configured": false, 00:13:29.106 "data_offset": 0, 00:13:29.106 "data_size": 63488 00:13:29.106 }, 00:13:29.106 { 00:13:29.106 "name": "BaseBdev3", 00:13:29.106 "uuid": "2ad6f449-601d-43ff-9adc-26bd5d994dcb", 00:13:29.106 "is_configured": true, 00:13:29.106 "data_offset": 2048, 00:13:29.106 "data_size": 63488 00:13:29.106 } 00:13:29.106 ] 00:13:29.106 }' 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.106 04:59:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 [2024-11-21 04:59:46.165674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.676 "name": "Existed_Raid", 00:13:29.676 "uuid": "0dff051a-6a5d-4a96-8ca9-209aee0edfe0", 00:13:29.676 "strip_size_kb": 64, 00:13:29.676 "state": "configuring", 00:13:29.676 "raid_level": "raid5f", 00:13:29.676 "superblock": true, 00:13:29.676 "num_base_bdevs": 3, 00:13:29.676 "num_base_bdevs_discovered": 2, 00:13:29.676 "num_base_bdevs_operational": 3, 00:13:29.676 "base_bdevs_list": [ 00:13:29.676 { 00:13:29.676 "name": null, 00:13:29.676 "uuid": "d48fafbd-662a-4026-aafb-5322b08b8f60", 00:13:29.676 "is_configured": false, 00:13:29.676 "data_offset": 0, 00:13:29.676 "data_size": 63488 00:13:29.676 }, 00:13:29.676 { 00:13:29.676 "name": "BaseBdev2", 00:13:29.676 "uuid": "d45a13b3-4d84-4f8f-b9bc-21382624d906", 00:13:29.676 "is_configured": true, 00:13:29.676 "data_offset": 2048, 00:13:29.676 "data_size": 63488 00:13:29.676 }, 00:13:29.676 { 00:13:29.676 "name": "BaseBdev3", 00:13:29.676 "uuid": "2ad6f449-601d-43ff-9adc-26bd5d994dcb", 00:13:29.676 "is_configured": true, 00:13:29.676 "data_offset": 2048, 00:13:29.676 "data_size": 63488 00:13:29.676 } 00:13:29.676 ] 00:13:29.676 }' 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.676 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.936 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.936 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.936 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.936 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:29.936 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.936 04:59:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:29.936 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.936 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:29.936 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.936 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.936 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d48fafbd-662a-4026-aafb-5322b08b8f60 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.196 [2024-11-21 04:59:46.701540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:30.196 [2024-11-21 04:59:46.701892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:30.196 [2024-11-21 04:59:46.701955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:30.196 NewBaseBdev 00:13:30.196 [2024-11-21 04:59:46.702349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:30.196 [2024-11-21 04:59:46.702885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.196 [2024-11-21 04:59:46.702953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000006d00 00:13:30.196 [2024-11-21 04:59:46.703164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.196 [ 00:13:30.196 { 00:13:30.196 "name": "NewBaseBdev", 00:13:30.196 "aliases": [ 00:13:30.196 "d48fafbd-662a-4026-aafb-5322b08b8f60" 00:13:30.196 ], 00:13:30.196 "product_name": "Malloc disk", 00:13:30.196 "block_size": 512, 00:13:30.196 "num_blocks": 65536, 00:13:30.196 "uuid": "d48fafbd-662a-4026-aafb-5322b08b8f60", 
00:13:30.196 "assigned_rate_limits": { 00:13:30.196 "rw_ios_per_sec": 0, 00:13:30.196 "rw_mbytes_per_sec": 0, 00:13:30.196 "r_mbytes_per_sec": 0, 00:13:30.196 "w_mbytes_per_sec": 0 00:13:30.196 }, 00:13:30.196 "claimed": true, 00:13:30.196 "claim_type": "exclusive_write", 00:13:30.196 "zoned": false, 00:13:30.196 "supported_io_types": { 00:13:30.196 "read": true, 00:13:30.196 "write": true, 00:13:30.196 "unmap": true, 00:13:30.196 "flush": true, 00:13:30.196 "reset": true, 00:13:30.196 "nvme_admin": false, 00:13:30.196 "nvme_io": false, 00:13:30.196 "nvme_io_md": false, 00:13:30.196 "write_zeroes": true, 00:13:30.196 "zcopy": true, 00:13:30.196 "get_zone_info": false, 00:13:30.196 "zone_management": false, 00:13:30.196 "zone_append": false, 00:13:30.196 "compare": false, 00:13:30.196 "compare_and_write": false, 00:13:30.196 "abort": true, 00:13:30.196 "seek_hole": false, 00:13:30.196 "seek_data": false, 00:13:30.196 "copy": true, 00:13:30.196 "nvme_iov_md": false 00:13:30.196 }, 00:13:30.196 "memory_domains": [ 00:13:30.196 { 00:13:30.196 "dma_device_id": "system", 00:13:30.196 "dma_device_type": 1 00:13:30.196 }, 00:13:30.196 { 00:13:30.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.196 "dma_device_type": 2 00:13:30.196 } 00:13:30.196 ], 00:13:30.196 "driver_specific": {} 00:13:30.196 } 00:13:30.196 ] 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.196 04:59:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.196 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.196 "name": "Existed_Raid", 00:13:30.196 "uuid": "0dff051a-6a5d-4a96-8ca9-209aee0edfe0", 00:13:30.196 "strip_size_kb": 64, 00:13:30.196 "state": "online", 00:13:30.196 "raid_level": "raid5f", 00:13:30.196 "superblock": true, 00:13:30.196 "num_base_bdevs": 3, 00:13:30.196 "num_base_bdevs_discovered": 3, 00:13:30.196 "num_base_bdevs_operational": 3, 00:13:30.196 "base_bdevs_list": [ 00:13:30.196 { 00:13:30.196 "name": "NewBaseBdev", 00:13:30.196 "uuid": "d48fafbd-662a-4026-aafb-5322b08b8f60", 
00:13:30.196 "is_configured": true, 00:13:30.196 "data_offset": 2048, 00:13:30.196 "data_size": 63488 00:13:30.196 }, 00:13:30.196 { 00:13:30.196 "name": "BaseBdev2", 00:13:30.197 "uuid": "d45a13b3-4d84-4f8f-b9bc-21382624d906", 00:13:30.197 "is_configured": true, 00:13:30.197 "data_offset": 2048, 00:13:30.197 "data_size": 63488 00:13:30.197 }, 00:13:30.197 { 00:13:30.197 "name": "BaseBdev3", 00:13:30.197 "uuid": "2ad6f449-601d-43ff-9adc-26bd5d994dcb", 00:13:30.197 "is_configured": true, 00:13:30.197 "data_offset": 2048, 00:13:30.197 "data_size": 63488 00:13:30.197 } 00:13:30.197 ] 00:13:30.197 }' 00:13:30.197 04:59:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.197 04:59:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:30.456 
[2024-11-21 04:59:47.160990] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.456 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:30.716 "name": "Existed_Raid", 00:13:30.716 "aliases": [ 00:13:30.716 "0dff051a-6a5d-4a96-8ca9-209aee0edfe0" 00:13:30.716 ], 00:13:30.716 "product_name": "Raid Volume", 00:13:30.716 "block_size": 512, 00:13:30.716 "num_blocks": 126976, 00:13:30.716 "uuid": "0dff051a-6a5d-4a96-8ca9-209aee0edfe0", 00:13:30.716 "assigned_rate_limits": { 00:13:30.716 "rw_ios_per_sec": 0, 00:13:30.716 "rw_mbytes_per_sec": 0, 00:13:30.716 "r_mbytes_per_sec": 0, 00:13:30.716 "w_mbytes_per_sec": 0 00:13:30.716 }, 00:13:30.716 "claimed": false, 00:13:30.716 "zoned": false, 00:13:30.716 "supported_io_types": { 00:13:30.716 "read": true, 00:13:30.716 "write": true, 00:13:30.716 "unmap": false, 00:13:30.716 "flush": false, 00:13:30.716 "reset": true, 00:13:30.716 "nvme_admin": false, 00:13:30.716 "nvme_io": false, 00:13:30.716 "nvme_io_md": false, 00:13:30.716 "write_zeroes": true, 00:13:30.716 "zcopy": false, 00:13:30.716 "get_zone_info": false, 00:13:30.716 "zone_management": false, 00:13:30.716 "zone_append": false, 00:13:30.716 "compare": false, 00:13:30.716 "compare_and_write": false, 00:13:30.716 "abort": false, 00:13:30.716 "seek_hole": false, 00:13:30.716 "seek_data": false, 00:13:30.716 "copy": false, 00:13:30.716 "nvme_iov_md": false 00:13:30.716 }, 00:13:30.716 "driver_specific": { 00:13:30.716 "raid": { 00:13:30.716 "uuid": "0dff051a-6a5d-4a96-8ca9-209aee0edfe0", 00:13:30.716 "strip_size_kb": 64, 00:13:30.716 "state": "online", 00:13:30.716 "raid_level": "raid5f", 00:13:30.716 "superblock": true, 00:13:30.716 "num_base_bdevs": 3, 00:13:30.716 "num_base_bdevs_discovered": 3, 00:13:30.716 "num_base_bdevs_operational": 3, 00:13:30.716 "base_bdevs_list": 
[ 00:13:30.716 { 00:13:30.716 "name": "NewBaseBdev", 00:13:30.716 "uuid": "d48fafbd-662a-4026-aafb-5322b08b8f60", 00:13:30.716 "is_configured": true, 00:13:30.716 "data_offset": 2048, 00:13:30.716 "data_size": 63488 00:13:30.716 }, 00:13:30.716 { 00:13:30.716 "name": "BaseBdev2", 00:13:30.716 "uuid": "d45a13b3-4d84-4f8f-b9bc-21382624d906", 00:13:30.716 "is_configured": true, 00:13:30.716 "data_offset": 2048, 00:13:30.716 "data_size": 63488 00:13:30.716 }, 00:13:30.716 { 00:13:30.716 "name": "BaseBdev3", 00:13:30.716 "uuid": "2ad6f449-601d-43ff-9adc-26bd5d994dcb", 00:13:30.716 "is_configured": true, 00:13:30.716 "data_offset": 2048, 00:13:30.716 "data_size": 63488 00:13:30.716 } 00:13:30.716 ] 00:13:30.716 } 00:13:30.716 } 00:13:30.716 }' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:30.716 BaseBdev2 00:13:30.716 BaseBdev3' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.716 04:59:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.716 [2024-11-21 04:59:47.436287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.716 [2024-11-21 04:59:47.436366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.716 [2024-11-21 04:59:47.436470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.716 [2024-11-21 04:59:47.436784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.716 [2024-11-21 04:59:47.436814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91187 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 91187 ']' 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 91187 00:13:30.716 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:30.976 04:59:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.976 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91187 00:13:30.976 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.976 killing process with pid 91187 00:13:30.976 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.976 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91187' 00:13:30.976 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 91187 00:13:30.976 [2024-11-21 04:59:47.484534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.976 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 91187 00:13:30.976 [2024-11-21 04:59:47.544594] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.237 04:59:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:31.237 00:13:31.237 real 0m9.020s 00:13:31.237 user 0m15.047s 00:13:31.237 sys 0m1.981s 00:13:31.237 ************************************ 00:13:31.237 END TEST raid5f_state_function_test_sb 00:13:31.237 ************************************ 00:13:31.237 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.237 04:59:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.237 04:59:47 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:31.237 04:59:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:31.237 04:59:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.237 04:59:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:31.237 ************************************ 00:13:31.237 START TEST raid5f_superblock_test 00:13:31.237 ************************************ 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91791 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91791 00:13:31.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 91791 ']' 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.237 04:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.497 [2024-11-21 04:59:48.053567] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:13:31.497 [2024-11-21 04:59:48.053813] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91791 ] 00:13:31.757 [2024-11-21 04:59:48.231638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.757 [2024-11-21 04:59:48.273478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.757 [2024-11-21 04:59:48.350275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.757 [2024-11-21 04:59:48.350421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.327 malloc1 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.327 [2024-11-21 04:59:48.889076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:32.327 [2024-11-21 04:59:48.889190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.327 [2024-11-21 04:59:48.889218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:32.327 [2024-11-21 04:59:48.889238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.327 [2024-11-21 04:59:48.891760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.327 [2024-11-21 04:59:48.891810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:32.327 pt1 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.327 malloc2 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.327 [2024-11-21 04:59:48.923565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:32.327 [2024-11-21 04:59:48.923696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.327 [2024-11-21 04:59:48.923738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:32.327 [2024-11-21 04:59:48.923779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.327 [2024-11-21 04:59:48.926268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.327 [2024-11-21 04:59:48.926368] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:32.327 pt2 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.327 malloc3 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.327 [2024-11-21 04:59:48.962053] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:32.327 [2024-11-21 04:59:48.962193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.327 [2024-11-21 04:59:48.962237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:32.327 [2024-11-21 04:59:48.962280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.327 [2024-11-21 04:59:48.964690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.327 [2024-11-21 04:59:48.964774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:32.327 pt3 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:32.327 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.328 [2024-11-21 04:59:48.974108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:32.328 [2024-11-21 04:59:48.976367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:32.328 [2024-11-21 04:59:48.976479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:32.328 [2024-11-21 04:59:48.976715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:32.328 [2024-11-21 04:59:48.976783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:13:32.328 [2024-11-21 04:59:48.977137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:32.328 [2024-11-21 04:59:48.977665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:32.328 [2024-11-21 04:59:48.977723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:32.328 [2024-11-21 04:59:48.977969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.328 
04:59:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.328 04:59:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.328 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.328 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.328 "name": "raid_bdev1", 00:13:32.328 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:32.328 "strip_size_kb": 64, 00:13:32.328 "state": "online", 00:13:32.328 "raid_level": "raid5f", 00:13:32.328 "superblock": true, 00:13:32.328 "num_base_bdevs": 3, 00:13:32.328 "num_base_bdevs_discovered": 3, 00:13:32.328 "num_base_bdevs_operational": 3, 00:13:32.328 "base_bdevs_list": [ 00:13:32.328 { 00:13:32.328 "name": "pt1", 00:13:32.328 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:32.328 "is_configured": true, 00:13:32.328 "data_offset": 2048, 00:13:32.328 "data_size": 63488 00:13:32.328 }, 00:13:32.328 { 00:13:32.328 "name": "pt2", 00:13:32.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.328 "is_configured": true, 00:13:32.328 "data_offset": 2048, 00:13:32.328 "data_size": 63488 00:13:32.328 }, 00:13:32.328 { 00:13:32.328 "name": "pt3", 00:13:32.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:32.328 "is_configured": true, 00:13:32.328 "data_offset": 2048, 00:13:32.328 "data_size": 63488 00:13:32.328 } 00:13:32.328 ] 00:13:32.328 }' 00:13:32.328 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.328 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:32.907 04:59:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:32.907 [2024-11-21 04:59:49.389560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.907 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:32.907 "name": "raid_bdev1", 00:13:32.907 "aliases": [ 00:13:32.907 "643590df-ac27-4c4d-bd3a-49f5b29007b6" 00:13:32.907 ], 00:13:32.907 "product_name": "Raid Volume", 00:13:32.907 "block_size": 512, 00:13:32.907 "num_blocks": 126976, 00:13:32.907 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:32.907 "assigned_rate_limits": { 00:13:32.907 "rw_ios_per_sec": 0, 00:13:32.907 "rw_mbytes_per_sec": 0, 00:13:32.907 "r_mbytes_per_sec": 0, 00:13:32.907 "w_mbytes_per_sec": 0 00:13:32.907 }, 00:13:32.907 "claimed": false, 00:13:32.907 "zoned": false, 00:13:32.907 "supported_io_types": { 00:13:32.907 "read": true, 00:13:32.907 "write": true, 00:13:32.907 "unmap": false, 00:13:32.907 "flush": false, 00:13:32.907 "reset": true, 00:13:32.907 "nvme_admin": false, 00:13:32.907 "nvme_io": false, 00:13:32.907 "nvme_io_md": false, 
00:13:32.907 "write_zeroes": true, 00:13:32.907 "zcopy": false, 00:13:32.907 "get_zone_info": false, 00:13:32.907 "zone_management": false, 00:13:32.907 "zone_append": false, 00:13:32.907 "compare": false, 00:13:32.907 "compare_and_write": false, 00:13:32.907 "abort": false, 00:13:32.907 "seek_hole": false, 00:13:32.907 "seek_data": false, 00:13:32.907 "copy": false, 00:13:32.907 "nvme_iov_md": false 00:13:32.907 }, 00:13:32.907 "driver_specific": { 00:13:32.907 "raid": { 00:13:32.907 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:32.907 "strip_size_kb": 64, 00:13:32.907 "state": "online", 00:13:32.907 "raid_level": "raid5f", 00:13:32.907 "superblock": true, 00:13:32.907 "num_base_bdevs": 3, 00:13:32.907 "num_base_bdevs_discovered": 3, 00:13:32.907 "num_base_bdevs_operational": 3, 00:13:32.907 "base_bdevs_list": [ 00:13:32.907 { 00:13:32.907 "name": "pt1", 00:13:32.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:32.907 "is_configured": true, 00:13:32.907 "data_offset": 2048, 00:13:32.907 "data_size": 63488 00:13:32.908 }, 00:13:32.908 { 00:13:32.908 "name": "pt2", 00:13:32.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.908 "is_configured": true, 00:13:32.908 "data_offset": 2048, 00:13:32.908 "data_size": 63488 00:13:32.908 }, 00:13:32.908 { 00:13:32.908 "name": "pt3", 00:13:32.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:32.908 "is_configured": true, 00:13:32.908 "data_offset": 2048, 00:13:32.908 "data_size": 63488 00:13:32.908 } 00:13:32.908 ] 00:13:32.908 } 00:13:32.908 } 00:13:32.908 }' 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:32.908 pt2 00:13:32.908 pt3' 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.908 
04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.908 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.168 [2024-11-21 04:59:49.669008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=643590df-ac27-4c4d-bd3a-49f5b29007b6 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 643590df-ac27-4c4d-bd3a-49f5b29007b6 ']' 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:33.168 04:59:49 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.168 [2024-11-21 04:59:49.712740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.168 [2024-11-21 04:59:49.712764] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.168 [2024-11-21 04:59:49.712841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.168 [2024-11-21 04:59:49.712932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.168 [2024-11-21 04:59:49.712947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:33.168 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.169 [2024-11-21 04:59:49.844531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:33.169 [2024-11-21 04:59:49.846429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:33.169 [2024-11-21 04:59:49.846523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:33.169 [2024-11-21 04:59:49.846596] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:33.169 [2024-11-21 04:59:49.846646] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:33.169 [2024-11-21 04:59:49.846669] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:33.169 [2024-11-21 04:59:49.846682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.169 [2024-11-21 04:59:49.846693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:13:33.169 request: 00:13:33.169 { 00:13:33.169 "name": "raid_bdev1", 00:13:33.169 "raid_level": "raid5f", 00:13:33.169 "base_bdevs": [ 00:13:33.169 "malloc1", 00:13:33.169 "malloc2", 00:13:33.169 "malloc3" 00:13:33.169 ], 00:13:33.169 "strip_size_kb": 64, 00:13:33.169 "superblock": false, 00:13:33.169 "method": "bdev_raid_create", 00:13:33.169 "req_id": 1 00:13:33.169 } 00:13:33.169 Got JSON-RPC error response 00:13:33.169 response: 00:13:33.169 { 00:13:33.169 "code": -17, 00:13:33.169 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:33.169 } 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:33.169 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.429 [2024-11-21 04:59:49.908390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:33.429 [2024-11-21 04:59:49.908481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.429 [2024-11-21 04:59:49.908515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:33.429 [2024-11-21 04:59:49.908546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.429 [2024-11-21 04:59:49.910672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.429 [2024-11-21 04:59:49.910747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:33.429 [2024-11-21 04:59:49.910836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:33.429 [2024-11-21 04:59:49.910912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:33.429 pt1 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.429 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.429 "name": "raid_bdev1", 00:13:33.429 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:33.429 "strip_size_kb": 64, 00:13:33.429 "state": "configuring", 00:13:33.429 "raid_level": "raid5f", 00:13:33.429 "superblock": true, 00:13:33.429 "num_base_bdevs": 3, 00:13:33.429 "num_base_bdevs_discovered": 1, 00:13:33.430 
"num_base_bdevs_operational": 3, 00:13:33.430 "base_bdevs_list": [ 00:13:33.430 { 00:13:33.430 "name": "pt1", 00:13:33.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.430 "is_configured": true, 00:13:33.430 "data_offset": 2048, 00:13:33.430 "data_size": 63488 00:13:33.430 }, 00:13:33.430 { 00:13:33.430 "name": null, 00:13:33.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.430 "is_configured": false, 00:13:33.430 "data_offset": 2048, 00:13:33.430 "data_size": 63488 00:13:33.430 }, 00:13:33.430 { 00:13:33.430 "name": null, 00:13:33.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.430 "is_configured": false, 00:13:33.430 "data_offset": 2048, 00:13:33.430 "data_size": 63488 00:13:33.430 } 00:13:33.430 ] 00:13:33.430 }' 00:13:33.430 04:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.430 04:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.690 [2024-11-21 04:59:50.327716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:33.690 [2024-11-21 04:59:50.327815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.690 [2024-11-21 04:59:50.327872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:33.690 [2024-11-21 04:59:50.327915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.690 [2024-11-21 04:59:50.328377] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.690 [2024-11-21 04:59:50.328440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:33.690 [2024-11-21 04:59:50.328560] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:33.690 [2024-11-21 04:59:50.328592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:33.690 pt2 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.690 [2024-11-21 04:59:50.339676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.690 "name": "raid_bdev1", 00:13:33.690 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:33.690 "strip_size_kb": 64, 00:13:33.690 "state": "configuring", 00:13:33.690 "raid_level": "raid5f", 00:13:33.690 "superblock": true, 00:13:33.690 "num_base_bdevs": 3, 00:13:33.690 "num_base_bdevs_discovered": 1, 00:13:33.690 "num_base_bdevs_operational": 3, 00:13:33.690 "base_bdevs_list": [ 00:13:33.690 { 00:13:33.690 "name": "pt1", 00:13:33.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.690 "is_configured": true, 00:13:33.690 "data_offset": 2048, 00:13:33.690 "data_size": 63488 00:13:33.690 }, 00:13:33.690 { 00:13:33.690 "name": null, 00:13:33.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.690 "is_configured": false, 00:13:33.690 "data_offset": 0, 00:13:33.690 "data_size": 63488 00:13:33.690 }, 00:13:33.690 { 00:13:33.690 "name": null, 00:13:33.690 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.690 "is_configured": false, 00:13:33.690 "data_offset": 2048, 00:13:33.690 "data_size": 63488 00:13:33.690 } 00:13:33.690 ] 00:13:33.690 }' 00:13:33.690 04:59:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.690 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.287 [2024-11-21 04:59:50.774978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:34.287 [2024-11-21 04:59:50.775043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.287 [2024-11-21 04:59:50.775065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:34.287 [2024-11-21 04:59:50.775074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.287 [2024-11-21 04:59:50.775486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.287 [2024-11-21 04:59:50.775503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:34.287 [2024-11-21 04:59:50.775582] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:34.287 [2024-11-21 04:59:50.775604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:34.287 pt2 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:34.287 04:59:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.287 [2024-11-21 04:59:50.782923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:34.287 [2024-11-21 04:59:50.783004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.287 [2024-11-21 04:59:50.783044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:34.287 [2024-11-21 04:59:50.783052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.287 [2024-11-21 04:59:50.783417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.287 [2024-11-21 04:59:50.783434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:34.287 [2024-11-21 04:59:50.783490] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:34.287 [2024-11-21 04:59:50.783507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:34.287 [2024-11-21 04:59:50.783602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:34.287 [2024-11-21 04:59:50.783610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:34.287 [2024-11-21 04:59:50.783823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:34.287 [2024-11-21 04:59:50.784308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:34.287 [2024-11-21 04:59:50.784330] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:13:34.287 [2024-11-21 04:59:50.784435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.287 pt3 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.287 "name": "raid_bdev1", 00:13:34.287 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:34.287 "strip_size_kb": 64, 00:13:34.287 "state": "online", 00:13:34.287 "raid_level": "raid5f", 00:13:34.287 "superblock": true, 00:13:34.287 "num_base_bdevs": 3, 00:13:34.287 "num_base_bdevs_discovered": 3, 00:13:34.287 "num_base_bdevs_operational": 3, 00:13:34.287 "base_bdevs_list": [ 00:13:34.287 { 00:13:34.287 "name": "pt1", 00:13:34.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.287 "is_configured": true, 00:13:34.287 "data_offset": 2048, 00:13:34.287 "data_size": 63488 00:13:34.287 }, 00:13:34.287 { 00:13:34.287 "name": "pt2", 00:13:34.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.287 "is_configured": true, 00:13:34.287 "data_offset": 2048, 00:13:34.287 "data_size": 63488 00:13:34.287 }, 00:13:34.287 { 00:13:34.287 "name": "pt3", 00:13:34.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.287 "is_configured": true, 00:13:34.287 "data_offset": 2048, 00:13:34.287 "data_size": 63488 00:13:34.287 } 00:13:34.287 ] 00:13:34.287 }' 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.287 04:59:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.547 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:34.547 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:34.547 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:34.547 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:34.547 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:34.547 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:34.547 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:34.547 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.547 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.547 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:34.547 [2024-11-21 04:59:51.246394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.548 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.548 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:34.548 "name": "raid_bdev1", 00:13:34.548 "aliases": [ 00:13:34.548 "643590df-ac27-4c4d-bd3a-49f5b29007b6" 00:13:34.548 ], 00:13:34.548 "product_name": "Raid Volume", 00:13:34.548 "block_size": 512, 00:13:34.548 "num_blocks": 126976, 00:13:34.548 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:34.548 "assigned_rate_limits": { 00:13:34.548 "rw_ios_per_sec": 0, 00:13:34.548 "rw_mbytes_per_sec": 0, 00:13:34.548 "r_mbytes_per_sec": 0, 00:13:34.548 "w_mbytes_per_sec": 0 00:13:34.548 }, 00:13:34.548 "claimed": false, 00:13:34.548 "zoned": false, 00:13:34.548 "supported_io_types": { 00:13:34.548 "read": true, 00:13:34.548 "write": true, 00:13:34.548 "unmap": false, 00:13:34.548 "flush": false, 00:13:34.548 "reset": true, 00:13:34.548 "nvme_admin": false, 00:13:34.548 "nvme_io": false, 00:13:34.548 "nvme_io_md": false, 00:13:34.548 "write_zeroes": true, 00:13:34.548 "zcopy": false, 00:13:34.548 
"get_zone_info": false, 00:13:34.548 "zone_management": false, 00:13:34.548 "zone_append": false, 00:13:34.548 "compare": false, 00:13:34.548 "compare_and_write": false, 00:13:34.548 "abort": false, 00:13:34.548 "seek_hole": false, 00:13:34.548 "seek_data": false, 00:13:34.548 "copy": false, 00:13:34.548 "nvme_iov_md": false 00:13:34.548 }, 00:13:34.548 "driver_specific": { 00:13:34.548 "raid": { 00:13:34.548 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:34.548 "strip_size_kb": 64, 00:13:34.548 "state": "online", 00:13:34.548 "raid_level": "raid5f", 00:13:34.548 "superblock": true, 00:13:34.548 "num_base_bdevs": 3, 00:13:34.548 "num_base_bdevs_discovered": 3, 00:13:34.548 "num_base_bdevs_operational": 3, 00:13:34.548 "base_bdevs_list": [ 00:13:34.548 { 00:13:34.548 "name": "pt1", 00:13:34.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.548 "is_configured": true, 00:13:34.548 "data_offset": 2048, 00:13:34.548 "data_size": 63488 00:13:34.548 }, 00:13:34.548 { 00:13:34.548 "name": "pt2", 00:13:34.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.548 "is_configured": true, 00:13:34.548 "data_offset": 2048, 00:13:34.548 "data_size": 63488 00:13:34.548 }, 00:13:34.548 { 00:13:34.548 "name": "pt3", 00:13:34.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.548 "is_configured": true, 00:13:34.548 "data_offset": 2048, 00:13:34.548 "data_size": 63488 00:13:34.548 } 00:13:34.548 ] 00:13:34.548 } 00:13:34.548 } 00:13:34.548 }' 00:13:34.548 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:34.808 pt2 00:13:34.808 pt3' 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.808 04:59:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:34.808 [2024-11-21 04:59:51.505855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.808 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 643590df-ac27-4c4d-bd3a-49f5b29007b6 '!=' 643590df-ac27-4c4d-bd3a-49f5b29007b6 ']' 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.068 [2024-11-21 04:59:51.557627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.068 
04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.068 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.068 "name": "raid_bdev1", 00:13:35.068 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:35.068 "strip_size_kb": 64, 00:13:35.068 "state": "online", 00:13:35.068 "raid_level": "raid5f", 00:13:35.068 "superblock": true, 00:13:35.068 "num_base_bdevs": 3, 00:13:35.068 "num_base_bdevs_discovered": 2, 00:13:35.068 "num_base_bdevs_operational": 2, 00:13:35.068 "base_bdevs_list": [ 00:13:35.068 { 00:13:35.068 "name": null, 00:13:35.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.068 "is_configured": false, 00:13:35.068 "data_offset": 0, 00:13:35.068 "data_size": 63488 00:13:35.068 }, 00:13:35.068 { 00:13:35.068 "name": "pt2", 00:13:35.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.069 "is_configured": true, 00:13:35.069 "data_offset": 2048, 00:13:35.069 "data_size": 63488 00:13:35.069 }, 00:13:35.069 { 00:13:35.069 "name": "pt3", 00:13:35.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.069 "is_configured": true, 00:13:35.069 "data_offset": 2048, 00:13:35.069 "data_size": 63488 00:13:35.069 } 00:13:35.069 ] 00:13:35.069 }' 00:13:35.069 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.069 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.329 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:35.329 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.329 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.329 [2024-11-21 04:59:51.980860] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.329 [2024-11-21 04:59:51.980893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.329 [2024-11-21 04:59:51.980964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.329 [2024-11-21 04:59:51.981024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.329 [2024-11-21 04:59:51.981032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:13:35.329 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.329 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.329 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.329 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.329 04:59:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:35.329 04:59:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.329 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.329 [2024-11-21 04:59:52.056709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:35.329 [2024-11-21 04:59:52.056772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.329 [2024-11-21 04:59:52.056790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:35.329 [2024-11-21 04:59:52.056799] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:35.329 [2024-11-21 04:59:52.058993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.329 [2024-11-21 04:59:52.059031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:35.329 [2024-11-21 04:59:52.059112] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:35.329 [2024-11-21 04:59:52.059147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:35.589 pt2 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.589 "name": "raid_bdev1", 00:13:35.589 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:35.589 "strip_size_kb": 64, 00:13:35.589 "state": "configuring", 00:13:35.589 "raid_level": "raid5f", 00:13:35.589 "superblock": true, 00:13:35.589 "num_base_bdevs": 3, 00:13:35.589 "num_base_bdevs_discovered": 1, 00:13:35.589 "num_base_bdevs_operational": 2, 00:13:35.589 "base_bdevs_list": [ 00:13:35.589 { 00:13:35.589 "name": null, 00:13:35.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.589 "is_configured": false, 00:13:35.589 "data_offset": 2048, 00:13:35.589 "data_size": 63488 00:13:35.589 }, 00:13:35.589 { 00:13:35.589 "name": "pt2", 00:13:35.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.589 "is_configured": true, 00:13:35.589 "data_offset": 2048, 00:13:35.589 "data_size": 63488 00:13:35.589 }, 00:13:35.589 { 00:13:35.589 "name": null, 00:13:35.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.589 "is_configured": false, 00:13:35.589 "data_offset": 2048, 00:13:35.589 "data_size": 63488 00:13:35.589 } 00:13:35.589 ] 00:13:35.589 }' 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.589 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.849 [2024-11-21 04:59:52.468025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:35.849 [2024-11-21 04:59:52.468149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.849 [2024-11-21 04:59:52.468232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:35.849 [2024-11-21 04:59:52.468274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.849 [2024-11-21 04:59:52.468739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.849 [2024-11-21 04:59:52.468795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:35.849 [2024-11-21 04:59:52.468919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:35.849 [2024-11-21 04:59:52.468999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:35.849 [2024-11-21 04:59:52.469168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:35.849 [2024-11-21 04:59:52.469213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:35.849 [2024-11-21 04:59:52.469500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:35.849 [2024-11-21 04:59:52.469974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:35.849 [2024-11-21 04:59:52.470023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000006d00 00:13:35.849 [2024-11-21 04:59:52.470358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.849 pt3 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.849 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.850 04:59:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.850 "name": "raid_bdev1", 00:13:35.850 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:35.850 "strip_size_kb": 64, 00:13:35.850 "state": "online", 00:13:35.850 "raid_level": "raid5f", 00:13:35.850 "superblock": true, 00:13:35.850 "num_base_bdevs": 3, 00:13:35.850 "num_base_bdevs_discovered": 2, 00:13:35.850 "num_base_bdevs_operational": 2, 00:13:35.850 "base_bdevs_list": [ 00:13:35.850 { 00:13:35.850 "name": null, 00:13:35.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.850 "is_configured": false, 00:13:35.850 "data_offset": 2048, 00:13:35.850 "data_size": 63488 00:13:35.850 }, 00:13:35.850 { 00:13:35.850 "name": "pt2", 00:13:35.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.850 "is_configured": true, 00:13:35.850 "data_offset": 2048, 00:13:35.850 "data_size": 63488 00:13:35.850 }, 00:13:35.850 { 00:13:35.850 "name": "pt3", 00:13:35.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.850 "is_configured": true, 00:13:35.850 "data_offset": 2048, 00:13:35.850 "data_size": 63488 00:13:35.850 } 00:13:35.850 ] 00:13:35.850 }' 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.850 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.419 [2024-11-21 04:59:52.923253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.419 [2024-11-21 04:59:52.923282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.419 [2024-11-21 04:59:52.923370] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.419 [2024-11-21 04:59:52.923440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.419 [2024-11-21 04:59:52.923453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.419 [2024-11-21 04:59:52.991169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:36.419 [2024-11-21 04:59:52.991269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.419 [2024-11-21 04:59:52.991304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:36.419 [2024-11-21 04:59:52.991344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.419 [2024-11-21 04:59:52.993739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.419 [2024-11-21 04:59:52.993813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:36.419 [2024-11-21 04:59:52.993923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:36.419 [2024-11-21 04:59:52.993994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:36.419 [2024-11-21 04:59:52.994171] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:36.419 [2024-11-21 04:59:52.994236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.419 [2024-11-21 04:59:52.994312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:13:36.419 [2024-11-21 04:59:52.994429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.419 pt1 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:36.419 04:59:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.419 04:59:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.419 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.419 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.419 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.419 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.419 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.419 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.419 "name": "raid_bdev1", 00:13:36.419 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:36.419 "strip_size_kb": 64, 00:13:36.419 "state": "configuring", 00:13:36.419 "raid_level": "raid5f", 00:13:36.419 
"superblock": true, 00:13:36.419 "num_base_bdevs": 3, 00:13:36.419 "num_base_bdevs_discovered": 1, 00:13:36.419 "num_base_bdevs_operational": 2, 00:13:36.419 "base_bdevs_list": [ 00:13:36.419 { 00:13:36.419 "name": null, 00:13:36.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.419 "is_configured": false, 00:13:36.419 "data_offset": 2048, 00:13:36.419 "data_size": 63488 00:13:36.419 }, 00:13:36.419 { 00:13:36.419 "name": "pt2", 00:13:36.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.419 "is_configured": true, 00:13:36.419 "data_offset": 2048, 00:13:36.419 "data_size": 63488 00:13:36.419 }, 00:13:36.419 { 00:13:36.419 "name": null, 00:13:36.419 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.419 "is_configured": false, 00:13:36.419 "data_offset": 2048, 00:13:36.419 "data_size": 63488 00:13:36.419 } 00:13:36.420 ] 00:13:36.420 }' 00:13:36.420 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.420 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.689 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:36.689 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:36.689 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.689 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.949 [2024-11-21 04:59:53.446352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:36.949 [2024-11-21 04:59:53.446417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.949 [2024-11-21 04:59:53.446437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:36.949 [2024-11-21 04:59:53.446448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.949 [2024-11-21 04:59:53.446845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.949 [2024-11-21 04:59:53.446867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:36.949 [2024-11-21 04:59:53.446936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:36.949 [2024-11-21 04:59:53.446960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:36.949 [2024-11-21 04:59:53.447045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:36.949 [2024-11-21 04:59:53.447057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:36.949 [2024-11-21 04:59:53.447291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:36.949 [2024-11-21 04:59:53.447766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:36.949 [2024-11-21 04:59:53.447779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:36.949 [2024-11-21 04:59:53.447967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.949 pt3 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.949 "name": "raid_bdev1", 00:13:36.949 "uuid": "643590df-ac27-4c4d-bd3a-49f5b29007b6", 00:13:36.949 "strip_size_kb": 64, 00:13:36.949 "state": "online", 00:13:36.949 "raid_level": 
"raid5f", 00:13:36.949 "superblock": true, 00:13:36.949 "num_base_bdevs": 3, 00:13:36.949 "num_base_bdevs_discovered": 2, 00:13:36.949 "num_base_bdevs_operational": 2, 00:13:36.949 "base_bdevs_list": [ 00:13:36.949 { 00:13:36.949 "name": null, 00:13:36.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.949 "is_configured": false, 00:13:36.949 "data_offset": 2048, 00:13:36.949 "data_size": 63488 00:13:36.949 }, 00:13:36.949 { 00:13:36.949 "name": "pt2", 00:13:36.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.949 "is_configured": true, 00:13:36.949 "data_offset": 2048, 00:13:36.949 "data_size": 63488 00:13:36.949 }, 00:13:36.949 { 00:13:36.949 "name": "pt3", 00:13:36.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.949 "is_configured": true, 00:13:36.949 "data_offset": 2048, 00:13:36.949 "data_size": 63488 00:13:36.949 } 00:13:36.949 ] 00:13:36.949 }' 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.949 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.209 [2024-11-21 04:59:53.921815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.209 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 643590df-ac27-4c4d-bd3a-49f5b29007b6 '!=' 643590df-ac27-4c4d-bd3a-49f5b29007b6 ']' 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91791 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 91791 ']' 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 91791 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91791 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91791' 00:13:37.467 killing process with pid 91791 00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 91791 00:13:37.467 [2024-11-21 04:59:53.995977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.467 [2024-11-21 04:59:53.996129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:37.467 04:59:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 91791 00:13:37.467 [2024-11-21 04:59:53.996236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.467 [2024-11-21 04:59:53.996248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:37.467 [2024-11-21 04:59:54.031225] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.727 04:59:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:37.727 00:13:37.727 real 0m6.293s 00:13:37.727 user 0m10.408s 00:13:37.727 sys 0m1.405s 00:13:37.727 04:59:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.727 04:59:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.727 ************************************ 00:13:37.727 END TEST raid5f_superblock_test 00:13:37.727 ************************************ 00:13:37.727 04:59:54 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:37.727 04:59:54 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:37.727 04:59:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:37.727 04:59:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.727 04:59:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.727 ************************************ 00:13:37.727 START TEST raid5f_rebuild_test 00:13:37.727 ************************************ 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:37.727 04:59:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92218 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92218 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 92218 ']' 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.727 04:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.727 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:37.727 Zero copy mechanism will not be used. 00:13:37.727 [2024-11-21 04:59:54.457714] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:13:37.727 [2024-11-21 04:59:54.457920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92218 ] 00:13:37.987 [2024-11-21 04:59:54.612939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.987 [2024-11-21 04:59:54.638421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.987 [2024-11-21 04:59:54.682264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.987 [2024-11-21 04:59:54.682297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.927 BaseBdev1_malloc 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.927 04:59:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.927 [2024-11-21 04:59:55.321199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:38.927 [2024-11-21 04:59:55.321297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.927 [2024-11-21 04:59:55.321326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:38.927 [2024-11-21 04:59:55.321344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.927 [2024-11-21 04:59:55.323464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.927 [2024-11-21 04:59:55.323564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:38.927 BaseBdev1 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.927 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 BaseBdev2_malloc 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 [2024-11-21 04:59:55.345896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:13:38.928 [2024-11-21 04:59:55.345993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.928 [2024-11-21 04:59:55.346049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:38.928 [2024-11-21 04:59:55.346117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.928 [2024-11-21 04:59:55.348185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.928 [2024-11-21 04:59:55.348254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:38.928 BaseBdev2 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 BaseBdev3_malloc 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 [2024-11-21 04:59:55.366614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:38.928 [2024-11-21 04:59:55.366711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.928 [2024-11-21 04:59:55.366738] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:13:38.928 [2024-11-21 04:59:55.366747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.928 [2024-11-21 04:59:55.368845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.928 [2024-11-21 04:59:55.368883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:38.928 BaseBdev3 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 spare_malloc 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 spare_delay 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 [2024-11-21 04:59:55.419374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:38.928 [2024-11-21 04:59:55.419427] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.928 [2024-11-21 04:59:55.419454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:38.928 [2024-11-21 04:59:55.419464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.928 [2024-11-21 04:59:55.421897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.928 [2024-11-21 04:59:55.421941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:38.928 spare 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 [2024-11-21 04:59:55.427418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.928 [2024-11-21 04:59:55.429457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.928 [2024-11-21 04:59:55.429547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.928 [2024-11-21 04:59:55.429681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:38.928 [2024-11-21 04:59:55.429720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:38.928 [2024-11-21 04:59:55.430020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:38.928 [2024-11-21 04:59:55.430472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:38.928 [2024-11-21 04:59:55.430521] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:38.928 [2024-11-21 04:59:55.430713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:59:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.928 "name": "raid_bdev1", 00:13:38.928 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:38.928 "strip_size_kb": 64, 00:13:38.928 "state": "online", 00:13:38.928 "raid_level": "raid5f", 00:13:38.928 "superblock": false, 00:13:38.928 "num_base_bdevs": 3, 00:13:38.928 "num_base_bdevs_discovered": 3, 00:13:38.928 "num_base_bdevs_operational": 3, 00:13:38.928 "base_bdevs_list": [ 00:13:38.928 { 00:13:38.928 "name": "BaseBdev1", 00:13:38.928 "uuid": "c7136ce4-63da-5a9a-a75b-20372737e1df", 00:13:38.928 "is_configured": true, 00:13:38.928 "data_offset": 0, 00:13:38.928 "data_size": 65536 00:13:38.928 }, 00:13:38.928 { 00:13:38.928 "name": "BaseBdev2", 00:13:38.928 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:38.928 "is_configured": true, 00:13:38.928 "data_offset": 0, 00:13:38.928 "data_size": 65536 00:13:38.928 }, 00:13:38.928 { 00:13:38.928 "name": "BaseBdev3", 00:13:38.928 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:38.928 "is_configured": true, 00:13:38.928 "data_offset": 0, 00:13:38.928 "data_size": 65536 00:13:38.928 } 00:13:38.928 ] 00:13:38.928 }' 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.928 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.188 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:39.189 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.189 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.189 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.189 [2024-11-21 04:59:55.911539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:39.449 04:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:39.449 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:13:39.449 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:39.449 [2024-11-21 04:59:56.174964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:39.709 /dev/nbd0 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.709 1+0 records in 00:13:39.709 1+0 records out 00:13:39.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394125 s, 10.4 MB/s 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:39.709 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:39.969 512+0 records in 00:13:39.969 512+0 records out 00:13:39.969 67108864 bytes (67 MB, 64 MiB) copied, 0.284606 s, 236 MB/s 00:13:39.969 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:39.969 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.969 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:39.969 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.969 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:39.969 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.969 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:40.230 
[2024-11-21 04:59:56.737223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.230 [2024-11-21 04:59:56.752073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.230 "name": "raid_bdev1", 00:13:40.230 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:40.230 "strip_size_kb": 64, 00:13:40.230 "state": "online", 00:13:40.230 "raid_level": "raid5f", 00:13:40.230 "superblock": false, 00:13:40.230 "num_base_bdevs": 3, 00:13:40.230 "num_base_bdevs_discovered": 2, 00:13:40.230 "num_base_bdevs_operational": 2, 00:13:40.230 "base_bdevs_list": [ 00:13:40.230 { 00:13:40.230 "name": null, 00:13:40.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.230 "is_configured": false, 00:13:40.230 "data_offset": 0, 00:13:40.230 "data_size": 65536 00:13:40.230 }, 00:13:40.230 { 00:13:40.230 "name": "BaseBdev2", 00:13:40.230 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:40.230 "is_configured": true, 00:13:40.230 "data_offset": 0, 00:13:40.230 "data_size": 65536 00:13:40.230 }, 00:13:40.230 { 00:13:40.230 "name": "BaseBdev3", 00:13:40.230 "uuid": 
"dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:40.230 "is_configured": true, 00:13:40.230 "data_offset": 0, 00:13:40.230 "data_size": 65536 00:13:40.230 } 00:13:40.230 ] 00:13:40.230 }' 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.230 04:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.490 04:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.490 04:59:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.490 04:59:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.490 [2024-11-21 04:59:57.107489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.490 [2024-11-21 04:59:57.112126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:13:40.490 04:59:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.490 04:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:40.490 [2024-11-21 04:59:57.114311] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:41.432 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.432 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.432 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.432 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.432 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.432 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.432 04:59:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.432 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.432 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.432 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.694 "name": "raid_bdev1", 00:13:41.694 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:41.694 "strip_size_kb": 64, 00:13:41.694 "state": "online", 00:13:41.694 "raid_level": "raid5f", 00:13:41.694 "superblock": false, 00:13:41.694 "num_base_bdevs": 3, 00:13:41.694 "num_base_bdevs_discovered": 3, 00:13:41.694 "num_base_bdevs_operational": 3, 00:13:41.694 "process": { 00:13:41.694 "type": "rebuild", 00:13:41.694 "target": "spare", 00:13:41.694 "progress": { 00:13:41.694 "blocks": 20480, 00:13:41.694 "percent": 15 00:13:41.694 } 00:13:41.694 }, 00:13:41.694 "base_bdevs_list": [ 00:13:41.694 { 00:13:41.694 "name": "spare", 00:13:41.694 "uuid": "c9e4b050-bf0a-5c82-98f6-89ac73e25bf0", 00:13:41.694 "is_configured": true, 00:13:41.694 "data_offset": 0, 00:13:41.694 "data_size": 65536 00:13:41.694 }, 00:13:41.694 { 00:13:41.694 "name": "BaseBdev2", 00:13:41.694 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:41.694 "is_configured": true, 00:13:41.694 "data_offset": 0, 00:13:41.694 "data_size": 65536 00:13:41.694 }, 00:13:41.694 { 00:13:41.694 "name": "BaseBdev3", 00:13:41.694 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:41.694 "is_configured": true, 00:13:41.694 "data_offset": 0, 00:13:41.694 "data_size": 65536 00:13:41.694 } 00:13:41.694 ] 00:13:41.694 }' 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.694 [2024-11-21 04:59:58.274587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.694 [2024-11-21 04:59:58.321538] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:41.694 [2024-11-21 04:59:58.321612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.694 [2024-11-21 04:59:58.321627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.694 [2024-11-21 04:59:58.321637] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.694 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.694 "name": "raid_bdev1", 00:13:41.694 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:41.694 "strip_size_kb": 64, 00:13:41.694 "state": "online", 00:13:41.694 "raid_level": "raid5f", 00:13:41.694 "superblock": false, 00:13:41.694 "num_base_bdevs": 3, 00:13:41.694 "num_base_bdevs_discovered": 2, 00:13:41.694 "num_base_bdevs_operational": 2, 00:13:41.694 "base_bdevs_list": [ 00:13:41.694 { 00:13:41.694 "name": null, 00:13:41.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.694 "is_configured": false, 00:13:41.694 "data_offset": 0, 00:13:41.694 "data_size": 65536 00:13:41.694 }, 00:13:41.694 { 00:13:41.694 "name": "BaseBdev2", 00:13:41.694 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:41.694 "is_configured": true, 00:13:41.694 "data_offset": 0, 00:13:41.694 "data_size": 65536 00:13:41.694 }, 00:13:41.694 { 00:13:41.694 "name": "BaseBdev3", 00:13:41.694 "uuid": 
"dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:41.694 "is_configured": true, 00:13:41.694 "data_offset": 0, 00:13:41.695 "data_size": 65536 00:13:41.695 } 00:13:41.695 ] 00:13:41.695 }' 00:13:41.695 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.695 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.265 "name": "raid_bdev1", 00:13:42.265 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:42.265 "strip_size_kb": 64, 00:13:42.265 "state": "online", 00:13:42.265 "raid_level": "raid5f", 00:13:42.265 "superblock": false, 00:13:42.265 "num_base_bdevs": 3, 00:13:42.265 "num_base_bdevs_discovered": 2, 00:13:42.265 "num_base_bdevs_operational": 2, 00:13:42.265 "base_bdevs_list": [ 00:13:42.265 { 00:13:42.265 
"name": null, 00:13:42.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.265 "is_configured": false, 00:13:42.265 "data_offset": 0, 00:13:42.265 "data_size": 65536 00:13:42.265 }, 00:13:42.265 { 00:13:42.265 "name": "BaseBdev2", 00:13:42.265 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:42.265 "is_configured": true, 00:13:42.265 "data_offset": 0, 00:13:42.265 "data_size": 65536 00:13:42.265 }, 00:13:42.265 { 00:13:42.265 "name": "BaseBdev3", 00:13:42.265 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:42.265 "is_configured": true, 00:13:42.265 "data_offset": 0, 00:13:42.265 "data_size": 65536 00:13:42.265 } 00:13:42.265 ] 00:13:42.265 }' 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.265 [2024-11-21 04:59:58.918564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.265 [2024-11-21 04:59:58.923044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.265 04:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:42.265 [2024-11-21 04:59:58.925261] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:13:43.204 04:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.204 04:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.204 04:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.204 04:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.204 04:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.204 04:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.204 04:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.204 04:59:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.204 04:59:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.464 04:59:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.464 04:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.464 "name": "raid_bdev1", 00:13:43.464 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:43.464 "strip_size_kb": 64, 00:13:43.464 "state": "online", 00:13:43.464 "raid_level": "raid5f", 00:13:43.464 "superblock": false, 00:13:43.464 "num_base_bdevs": 3, 00:13:43.464 "num_base_bdevs_discovered": 3, 00:13:43.464 "num_base_bdevs_operational": 3, 00:13:43.464 "process": { 00:13:43.464 "type": "rebuild", 00:13:43.464 "target": "spare", 00:13:43.464 "progress": { 00:13:43.464 "blocks": 20480, 00:13:43.464 "percent": 15 00:13:43.464 } 00:13:43.464 }, 00:13:43.464 "base_bdevs_list": [ 00:13:43.464 { 00:13:43.464 "name": "spare", 00:13:43.464 "uuid": "c9e4b050-bf0a-5c82-98f6-89ac73e25bf0", 00:13:43.464 "is_configured": true, 00:13:43.464 "data_offset": 0, 
00:13:43.464 "data_size": 65536 00:13:43.464 }, 00:13:43.464 { 00:13:43.464 "name": "BaseBdev2", 00:13:43.464 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:43.464 "is_configured": true, 00:13:43.464 "data_offset": 0, 00:13:43.464 "data_size": 65536 00:13:43.464 }, 00:13:43.464 { 00:13:43.464 "name": "BaseBdev3", 00:13:43.464 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:43.464 "is_configured": true, 00:13:43.464 "data_offset": 0, 00:13:43.464 "data_size": 65536 00:13:43.464 } 00:13:43.464 ] 00:13:43.464 }' 00:13:43.464 04:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=452 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.464 05:00:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.464 "name": "raid_bdev1", 00:13:43.464 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:43.464 "strip_size_kb": 64, 00:13:43.464 "state": "online", 00:13:43.464 "raid_level": "raid5f", 00:13:43.464 "superblock": false, 00:13:43.464 "num_base_bdevs": 3, 00:13:43.464 "num_base_bdevs_discovered": 3, 00:13:43.464 "num_base_bdevs_operational": 3, 00:13:43.464 "process": { 00:13:43.464 "type": "rebuild", 00:13:43.464 "target": "spare", 00:13:43.464 "progress": { 00:13:43.464 "blocks": 22528, 00:13:43.464 "percent": 17 00:13:43.464 } 00:13:43.464 }, 00:13:43.464 "base_bdevs_list": [ 00:13:43.464 { 00:13:43.464 "name": "spare", 00:13:43.464 "uuid": "c9e4b050-bf0a-5c82-98f6-89ac73e25bf0", 00:13:43.464 "is_configured": true, 00:13:43.464 "data_offset": 0, 00:13:43.464 "data_size": 65536 00:13:43.464 }, 00:13:43.464 { 00:13:43.464 "name": "BaseBdev2", 00:13:43.464 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:43.464 "is_configured": true, 00:13:43.464 "data_offset": 0, 00:13:43.464 "data_size": 65536 00:13:43.464 }, 00:13:43.464 { 00:13:43.464 "name": "BaseBdev3", 00:13:43.464 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:43.464 "is_configured": true, 00:13:43.464 "data_offset": 0, 00:13:43.464 "data_size": 65536 00:13:43.464 } 
00:13:43.464 ] 00:13:43.464 }' 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.464 05:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.844 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.844 "name": "raid_bdev1", 00:13:44.844 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:44.844 
"strip_size_kb": 64, 00:13:44.844 "state": "online", 00:13:44.844 "raid_level": "raid5f", 00:13:44.844 "superblock": false, 00:13:44.844 "num_base_bdevs": 3, 00:13:44.844 "num_base_bdevs_discovered": 3, 00:13:44.844 "num_base_bdevs_operational": 3, 00:13:44.844 "process": { 00:13:44.844 "type": "rebuild", 00:13:44.844 "target": "spare", 00:13:44.844 "progress": { 00:13:44.844 "blocks": 45056, 00:13:44.844 "percent": 34 00:13:44.844 } 00:13:44.844 }, 00:13:44.844 "base_bdevs_list": [ 00:13:44.844 { 00:13:44.844 "name": "spare", 00:13:44.844 "uuid": "c9e4b050-bf0a-5c82-98f6-89ac73e25bf0", 00:13:44.844 "is_configured": true, 00:13:44.844 "data_offset": 0, 00:13:44.844 "data_size": 65536 00:13:44.844 }, 00:13:44.844 { 00:13:44.844 "name": "BaseBdev2", 00:13:44.844 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:44.844 "is_configured": true, 00:13:44.844 "data_offset": 0, 00:13:44.844 "data_size": 65536 00:13:44.844 }, 00:13:44.844 { 00:13:44.844 "name": "BaseBdev3", 00:13:44.844 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:44.844 "is_configured": true, 00:13:44.844 "data_offset": 0, 00:13:44.844 "data_size": 65536 00:13:44.845 } 00:13:44.845 ] 00:13:44.845 }' 00:13:44.845 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.845 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.845 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.845 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.845 05:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.784 05:00:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.784 "name": "raid_bdev1", 00:13:45.784 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:45.784 "strip_size_kb": 64, 00:13:45.784 "state": "online", 00:13:45.784 "raid_level": "raid5f", 00:13:45.784 "superblock": false, 00:13:45.784 "num_base_bdevs": 3, 00:13:45.784 "num_base_bdevs_discovered": 3, 00:13:45.784 "num_base_bdevs_operational": 3, 00:13:45.784 "process": { 00:13:45.784 "type": "rebuild", 00:13:45.784 "target": "spare", 00:13:45.784 "progress": { 00:13:45.784 "blocks": 67584, 00:13:45.784 "percent": 51 00:13:45.784 } 00:13:45.784 }, 00:13:45.784 "base_bdevs_list": [ 00:13:45.784 { 00:13:45.784 "name": "spare", 00:13:45.784 "uuid": "c9e4b050-bf0a-5c82-98f6-89ac73e25bf0", 00:13:45.784 "is_configured": true, 00:13:45.784 "data_offset": 0, 00:13:45.784 "data_size": 65536 00:13:45.784 }, 00:13:45.784 { 00:13:45.784 "name": "BaseBdev2", 00:13:45.784 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:45.784 
"is_configured": true, 00:13:45.784 "data_offset": 0, 00:13:45.784 "data_size": 65536 00:13:45.784 }, 00:13:45.784 { 00:13:45.784 "name": "BaseBdev3", 00:13:45.784 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:45.784 "is_configured": true, 00:13:45.784 "data_offset": 0, 00:13:45.784 "data_size": 65536 00:13:45.784 } 00:13:45.784 ] 00:13:45.784 }' 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.784 05:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.166 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.166 "name": "raid_bdev1", 00:13:47.166 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:47.166 "strip_size_kb": 64, 00:13:47.166 "state": "online", 00:13:47.166 "raid_level": "raid5f", 00:13:47.166 "superblock": false, 00:13:47.166 "num_base_bdevs": 3, 00:13:47.166 "num_base_bdevs_discovered": 3, 00:13:47.166 "num_base_bdevs_operational": 3, 00:13:47.166 "process": { 00:13:47.166 "type": "rebuild", 00:13:47.166 "target": "spare", 00:13:47.166 "progress": { 00:13:47.166 "blocks": 92160, 00:13:47.166 "percent": 70 00:13:47.166 } 00:13:47.166 }, 00:13:47.166 "base_bdevs_list": [ 00:13:47.166 { 00:13:47.166 "name": "spare", 00:13:47.166 "uuid": "c9e4b050-bf0a-5c82-98f6-89ac73e25bf0", 00:13:47.166 "is_configured": true, 00:13:47.166 "data_offset": 0, 00:13:47.166 "data_size": 65536 00:13:47.166 }, 00:13:47.166 { 00:13:47.166 "name": "BaseBdev2", 00:13:47.166 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:47.166 "is_configured": true, 00:13:47.166 "data_offset": 0, 00:13:47.166 "data_size": 65536 00:13:47.166 }, 00:13:47.166 { 00:13:47.166 "name": "BaseBdev3", 00:13:47.166 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:47.166 "is_configured": true, 00:13:47.166 "data_offset": 0, 00:13:47.167 "data_size": 65536 00:13:47.167 } 00:13:47.167 ] 00:13:47.167 }' 00:13:47.167 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.167 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.167 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.167 05:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.167 05:00:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.106 "name": "raid_bdev1", 00:13:48.106 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:48.106 "strip_size_kb": 64, 00:13:48.106 "state": "online", 00:13:48.106 "raid_level": "raid5f", 00:13:48.106 "superblock": false, 00:13:48.106 "num_base_bdevs": 3, 00:13:48.106 "num_base_bdevs_discovered": 3, 00:13:48.106 "num_base_bdevs_operational": 3, 00:13:48.106 "process": { 00:13:48.106 "type": "rebuild", 00:13:48.106 "target": "spare", 00:13:48.106 "progress": { 00:13:48.106 "blocks": 114688, 00:13:48.106 "percent": 87 00:13:48.106 } 00:13:48.106 }, 00:13:48.106 "base_bdevs_list": [ 00:13:48.106 { 
00:13:48.106 "name": "spare", 00:13:48.106 "uuid": "c9e4b050-bf0a-5c82-98f6-89ac73e25bf0", 00:13:48.106 "is_configured": true, 00:13:48.106 "data_offset": 0, 00:13:48.106 "data_size": 65536 00:13:48.106 }, 00:13:48.106 { 00:13:48.106 "name": "BaseBdev2", 00:13:48.106 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:48.106 "is_configured": true, 00:13:48.106 "data_offset": 0, 00:13:48.106 "data_size": 65536 00:13:48.106 }, 00:13:48.106 { 00:13:48.106 "name": "BaseBdev3", 00:13:48.106 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:48.106 "is_configured": true, 00:13:48.106 "data_offset": 0, 00:13:48.106 "data_size": 65536 00:13:48.106 } 00:13:48.106 ] 00:13:48.106 }' 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.106 05:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.676 [2024-11-21 05:00:05.361086] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:48.676 [2024-11-21 05:00:05.361227] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:48.676 [2024-11-21 05:00:05.361292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.246 05:00:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.246 "name": "raid_bdev1", 00:13:49.246 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:49.246 "strip_size_kb": 64, 00:13:49.246 "state": "online", 00:13:49.246 "raid_level": "raid5f", 00:13:49.246 "superblock": false, 00:13:49.246 "num_base_bdevs": 3, 00:13:49.246 "num_base_bdevs_discovered": 3, 00:13:49.246 "num_base_bdevs_operational": 3, 00:13:49.246 "base_bdevs_list": [ 00:13:49.246 { 00:13:49.246 "name": "spare", 00:13:49.246 "uuid": "c9e4b050-bf0a-5c82-98f6-89ac73e25bf0", 00:13:49.246 "is_configured": true, 00:13:49.246 "data_offset": 0, 00:13:49.246 "data_size": 65536 00:13:49.246 }, 00:13:49.246 { 00:13:49.246 "name": "BaseBdev2", 00:13:49.246 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:49.246 "is_configured": true, 00:13:49.246 "data_offset": 0, 00:13:49.246 "data_size": 65536 00:13:49.246 }, 00:13:49.246 { 00:13:49.246 "name": "BaseBdev3", 00:13:49.246 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:49.246 "is_configured": true, 00:13:49.246 "data_offset": 0, 00:13:49.246 "data_size": 65536 00:13:49.246 } 
00:13:49.246 ] 00:13:49.246 }' 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.246 05:00:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.516 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.516 "name": "raid_bdev1", 00:13:49.516 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:49.516 "strip_size_kb": 64, 00:13:49.516 "state": "online", 00:13:49.516 "raid_level": "raid5f", 00:13:49.516 "superblock": false, 
00:13:49.516 "num_base_bdevs": 3, 00:13:49.516 "num_base_bdevs_discovered": 3, 00:13:49.516 "num_base_bdevs_operational": 3, 00:13:49.516 "base_bdevs_list": [ 00:13:49.516 { 00:13:49.516 "name": "spare", 00:13:49.516 "uuid": "c9e4b050-bf0a-5c82-98f6-89ac73e25bf0", 00:13:49.516 "is_configured": true, 00:13:49.516 "data_offset": 0, 00:13:49.516 "data_size": 65536 00:13:49.516 }, 00:13:49.516 { 00:13:49.516 "name": "BaseBdev2", 00:13:49.516 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:49.516 "is_configured": true, 00:13:49.516 "data_offset": 0, 00:13:49.516 "data_size": 65536 00:13:49.516 }, 00:13:49.516 { 00:13:49.516 "name": "BaseBdev3", 00:13:49.516 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 00:13:49.516 "is_configured": true, 00:13:49.516 "data_offset": 0, 00:13:49.516 "data_size": 65536 00:13:49.516 } 00:13:49.516 ] 00:13:49.516 }' 00:13:49.516 05:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.516 
05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.516 "name": "raid_bdev1", 00:13:49.516 "uuid": "1e58db3b-d5b4-4077-af5d-50d9f11b627a", 00:13:49.516 "strip_size_kb": 64, 00:13:49.516 "state": "online", 00:13:49.516 "raid_level": "raid5f", 00:13:49.516 "superblock": false, 00:13:49.516 "num_base_bdevs": 3, 00:13:49.516 "num_base_bdevs_discovered": 3, 00:13:49.516 "num_base_bdevs_operational": 3, 00:13:49.516 "base_bdevs_list": [ 00:13:49.516 { 00:13:49.516 "name": "spare", 00:13:49.516 "uuid": "c9e4b050-bf0a-5c82-98f6-89ac73e25bf0", 00:13:49.516 "is_configured": true, 00:13:49.516 "data_offset": 0, 00:13:49.516 "data_size": 65536 00:13:49.516 }, 00:13:49.516 { 00:13:49.516 "name": "BaseBdev2", 00:13:49.516 "uuid": "9bc8ed36-764d-5f7d-ba60-137ea6fdc816", 00:13:49.516 "is_configured": true, 00:13:49.516 "data_offset": 0, 00:13:49.516 "data_size": 65536 00:13:49.516 }, 00:13:49.516 { 00:13:49.516 "name": "BaseBdev3", 00:13:49.516 "uuid": "dfc6fb6e-5e5c-5dab-9170-a3cb45c03c90", 
00:13:49.516 "is_configured": true, 00:13:49.516 "data_offset": 0, 00:13:49.516 "data_size": 65536 00:13:49.516 } 00:13:49.516 ] 00:13:49.516 }' 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.516 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.792 [2024-11-21 05:00:06.460931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.792 [2024-11-21 05:00:06.460964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.792 [2024-11-21 05:00:06.461045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.792 [2024-11-21 05:00:06.461178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.792 [2024-11-21 05:00:06.461190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:49.792 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:50.052 /dev/nbd0 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.052 1+0 records in 00:13:50.052 1+0 records out 00:13:50.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349348 s, 11.7 MB/s 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:50.052 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:50.311 /dev/nbd1 00:13:50.311 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:50.311 05:00:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:50.311 05:00:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:50.311 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:50.311 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.311 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:50.311 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:50.312 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:50.312 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.312 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.312 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.312 1+0 records in 00:13:50.312 1+0 records out 00:13:50.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278363 s, 14.7 MB/s 00:13:50.312 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.312 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:50.312 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.312 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:50.312 05:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:50.312 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.312 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:50.312 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.571 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92218 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 92218 ']' 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 92218 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92218 00:13:50.831 killing process with pid 92218 00:13:50.831 Received shutdown signal, test time was about 60.000000 seconds 00:13:50.831 00:13:50.831 Latency(us) 00:13:50.831 [2024-11-21T05:00:07.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.831 [2024-11-21T05:00:07.566Z] =================================================================================================================== 00:13:50.831 [2024-11-21T05:00:07.566Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92218' 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 92218 00:13:50.831 [2024-11-21 05:00:07.475809] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.831 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 92218 00:13:50.831 [2024-11-21 05:00:07.517521] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.091 ************************************ 00:13:51.091 END TEST raid5f_rebuild_test 00:13:51.091 ************************************ 00:13:51.091 05:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:51.091 00:13:51.091 real 0m13.384s 00:13:51.091 user 0m16.745s 00:13:51.091 sys 0m1.877s 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.092 05:00:07 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:51.092 05:00:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:51.092 05:00:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.092 05:00:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.092 ************************************ 00:13:51.092 START TEST raid5f_rebuild_test_sb 00:13:51.092 ************************************ 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
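The trace above repeatedly verifies rebuild progress by filtering the `bdev_raid_get_bdevs all` RPC output through jq (bdev_raid.sh@174/@176/@177). A minimal standalone sketch of that jq filtering, under stated assumptions: `get_bdev_info` is a hypothetical stub replaying one captured RPC response, and `jq` is installed; the real test uses `rpc_cmd` against a live bdevperf instance.

```shell
#!/usr/bin/env bash
# Hypothetical stub standing in for `rpc_cmd bdev_raid_get_bdevs all`;
# it replays a response shaped like the ones captured in the trace above.
get_bdev_info() {
  cat <<'EOF'
[{"name": "raid_bdev1", "state": "online",
  "process": {"type": "rebuild", "target": "spare",
              "progress": {"blocks": 45056, "percent": 34}}}]
EOF
}

# Same jq expressions the trace shows at bdev_raid.sh@174/@176/@177:
raid_bdev_info=$(get_bdev_info | jq -r '.[] | select(.name == "raid_bdev1")')
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")
echo "$process_type $process_target"
```

Once the rebuild finishes, `.process` disappears from the RPC output, so the `// "none"` fallbacks make both checks report `none`; that is what lets the wait loop break at bdev_raid.sh@709 in the trace.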
00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92637 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92637 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 92637 ']' 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:51.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.092 05:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.352 [2024-11-21 05:00:07.885334] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:13:51.352 [2024-11-21 05:00:07.885531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92637 ] 00:13:51.352 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:51.352 Zero copy mechanism will not be used. 00:13:51.352 [2024-11-21 05:00:08.033598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.352 [2024-11-21 05:00:08.058420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.612 [2024-11-21 05:00:08.100193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.612 [2024-11-21 05:00:08.100305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.182 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.182 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:52.182 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.182 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:52.182 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.182 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:52.182 BaseBdev1_malloc 00:13:52.182 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.182 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:52.182 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.182 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.182 [2024-11-21 05:00:08.746293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:52.183 [2024-11-21 05:00:08.746410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.183 [2024-11-21 05:00:08.746457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.183 [2024-11-21 05:00:08.746489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.183 [2024-11-21 05:00:08.748768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.183 [2024-11-21 05:00:08.748841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:52.183 BaseBdev1 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.183 BaseBdev2_malloc 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.183 05:00:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.183 [2024-11-21 05:00:08.771055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:52.183 [2024-11-21 05:00:08.771122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.183 [2024-11-21 05:00:08.771145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:52.183 [2024-11-21 05:00:08.771153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.183 [2024-11-21 05:00:08.773303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.183 [2024-11-21 05:00:08.773336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:52.183 BaseBdev2 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.183 BaseBdev3_malloc 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.183 [2024-11-21 05:00:08.799636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:52.183 [2024-11-21 05:00:08.799726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.183 [2024-11-21 05:00:08.799766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:52.183 [2024-11-21 05:00:08.799794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.183 [2024-11-21 05:00:08.801845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.183 [2024-11-21 05:00:08.801915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:52.183 BaseBdev3 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.183 spare_malloc 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.183 spare_delay 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.183 [2024-11-21 05:00:08.841386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:52.183 [2024-11-21 05:00:08.841464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.183 [2024-11-21 05:00:08.841490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:52.183 [2024-11-21 05:00:08.841498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.183 [2024-11-21 05:00:08.843548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.183 [2024-11-21 05:00:08.843585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:52.183 spare 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.183 [2024-11-21 05:00:08.849428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.183 [2024-11-21 05:00:08.851152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.183 [2024-11-21 05:00:08.851212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.183 [2024-11-21 
05:00:08.851374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:52.183 [2024-11-21 05:00:08.851393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:52.183 [2024-11-21 05:00:08.851644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:52.183 [2024-11-21 05:00:08.852039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:52.183 [2024-11-21 05:00:08.852051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:52.183 [2024-11-21 05:00:08.852248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.183 
05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.183 "name": "raid_bdev1", 00:13:52.183 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:13:52.183 "strip_size_kb": 64, 00:13:52.183 "state": "online", 00:13:52.183 "raid_level": "raid5f", 00:13:52.183 "superblock": true, 00:13:52.183 "num_base_bdevs": 3, 00:13:52.183 "num_base_bdevs_discovered": 3, 00:13:52.183 "num_base_bdevs_operational": 3, 00:13:52.183 "base_bdevs_list": [ 00:13:52.183 { 00:13:52.183 "name": "BaseBdev1", 00:13:52.183 "uuid": "5a0cb979-2737-507a-a95c-527f8c06eca5", 00:13:52.183 "is_configured": true, 00:13:52.183 "data_offset": 2048, 00:13:52.183 "data_size": 63488 00:13:52.183 }, 00:13:52.183 { 00:13:52.183 "name": "BaseBdev2", 00:13:52.183 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:13:52.183 "is_configured": true, 00:13:52.183 "data_offset": 2048, 00:13:52.183 "data_size": 63488 00:13:52.183 }, 00:13:52.183 { 00:13:52.183 "name": "BaseBdev3", 00:13:52.183 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:13:52.183 "is_configured": true, 00:13:52.183 "data_offset": 2048, 00:13:52.183 "data_size": 63488 00:13:52.183 } 00:13:52.183 ] 00:13:52.183 }' 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.183 05:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.751 05:00:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:52.751 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.751 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.751 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:52.751 [2024-11-21 05:00:09.281148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.752 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:53.012 [2024-11-21 05:00:09.548566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:53.012 /dev/nbd0 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 
)) 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.012 1+0 records in 00:13:53.012 1+0 records out 00:13:53.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309577 s, 13.2 MB/s 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:53.012 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:53.272 496+0 records in 00:13:53.272 496+0 records out 00:13:53.272 65011712 bytes (65 MB, 62 MiB) copied, 0.27569 s, 236 MB/s 00:13:53.272 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:53.272 05:00:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.272 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:53.272 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:53.272 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:53.272 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.272 05:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.532 [2024-11-21 05:00:10.092131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.532 [2024-11-21 05:00:10.103179] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.532 "name": "raid_bdev1", 
00:13:53.532 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:13:53.532 "strip_size_kb": 64, 00:13:53.532 "state": "online", 00:13:53.532 "raid_level": "raid5f", 00:13:53.532 "superblock": true, 00:13:53.532 "num_base_bdevs": 3, 00:13:53.532 "num_base_bdevs_discovered": 2, 00:13:53.532 "num_base_bdevs_operational": 2, 00:13:53.532 "base_bdevs_list": [ 00:13:53.532 { 00:13:53.532 "name": null, 00:13:53.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.532 "is_configured": false, 00:13:53.532 "data_offset": 0, 00:13:53.532 "data_size": 63488 00:13:53.532 }, 00:13:53.532 { 00:13:53.532 "name": "BaseBdev2", 00:13:53.532 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:13:53.532 "is_configured": true, 00:13:53.532 "data_offset": 2048, 00:13:53.532 "data_size": 63488 00:13:53.532 }, 00:13:53.532 { 00:13:53.532 "name": "BaseBdev3", 00:13:53.532 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:13:53.532 "is_configured": true, 00:13:53.532 "data_offset": 2048, 00:13:53.532 "data_size": 63488 00:13:53.532 } 00:13:53.532 ] 00:13:53.532 }' 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.532 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.100 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.100 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.100 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.100 [2024-11-21 05:00:10.566391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.100 [2024-11-21 05:00:10.571102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:13:54.100 05:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.100 05:00:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:54.100 [2024-11-21 05:00:10.573336] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.040 "name": "raid_bdev1", 00:13:55.040 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:13:55.040 "strip_size_kb": 64, 00:13:55.040 "state": "online", 00:13:55.040 "raid_level": "raid5f", 00:13:55.040 "superblock": true, 00:13:55.040 "num_base_bdevs": 3, 00:13:55.040 "num_base_bdevs_discovered": 3, 00:13:55.040 "num_base_bdevs_operational": 3, 00:13:55.040 "process": { 00:13:55.040 "type": "rebuild", 00:13:55.040 "target": "spare", 00:13:55.040 "progress": { 00:13:55.040 "blocks": 20480, 00:13:55.040 "percent": 16 00:13:55.040 } 
00:13:55.040 }, 00:13:55.040 "base_bdevs_list": [ 00:13:55.040 { 00:13:55.040 "name": "spare", 00:13:55.040 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:13:55.040 "is_configured": true, 00:13:55.040 "data_offset": 2048, 00:13:55.040 "data_size": 63488 00:13:55.040 }, 00:13:55.040 { 00:13:55.040 "name": "BaseBdev2", 00:13:55.040 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:13:55.040 "is_configured": true, 00:13:55.040 "data_offset": 2048, 00:13:55.040 "data_size": 63488 00:13:55.040 }, 00:13:55.040 { 00:13:55.040 "name": "BaseBdev3", 00:13:55.040 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:13:55.040 "is_configured": true, 00:13:55.040 "data_offset": 2048, 00:13:55.040 "data_size": 63488 00:13:55.040 } 00:13:55.040 ] 00:13:55.040 }' 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.040 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.040 [2024-11-21 05:00:11.705518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.300 [2024-11-21 05:00:11.780540] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:55.300 [2024-11-21 05:00:11.780674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.300 [2024-11-21 05:00:11.780715] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.300 [2024-11-21 05:00:11.780740] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.300 "name": "raid_bdev1", 00:13:55.300 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:13:55.300 "strip_size_kb": 64, 00:13:55.300 "state": "online", 00:13:55.300 "raid_level": "raid5f", 00:13:55.300 "superblock": true, 00:13:55.300 "num_base_bdevs": 3, 00:13:55.300 "num_base_bdevs_discovered": 2, 00:13:55.300 "num_base_bdevs_operational": 2, 00:13:55.300 "base_bdevs_list": [ 00:13:55.300 { 00:13:55.300 "name": null, 00:13:55.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.300 "is_configured": false, 00:13:55.300 "data_offset": 0, 00:13:55.300 "data_size": 63488 00:13:55.300 }, 00:13:55.300 { 00:13:55.300 "name": "BaseBdev2", 00:13:55.300 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:13:55.300 "is_configured": true, 00:13:55.300 "data_offset": 2048, 00:13:55.300 "data_size": 63488 00:13:55.300 }, 00:13:55.300 { 00:13:55.300 "name": "BaseBdev3", 00:13:55.300 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:13:55.300 "is_configured": true, 00:13:55.300 "data_offset": 2048, 00:13:55.300 "data_size": 63488 00:13:55.300 } 00:13:55.300 ] 00:13:55.300 }' 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.300 05:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.561 05:00:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.561 "name": "raid_bdev1", 00:13:55.561 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:13:55.561 "strip_size_kb": 64, 00:13:55.561 "state": "online", 00:13:55.561 "raid_level": "raid5f", 00:13:55.561 "superblock": true, 00:13:55.561 "num_base_bdevs": 3, 00:13:55.561 "num_base_bdevs_discovered": 2, 00:13:55.561 "num_base_bdevs_operational": 2, 00:13:55.561 "base_bdevs_list": [ 00:13:55.561 { 00:13:55.561 "name": null, 00:13:55.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.561 "is_configured": false, 00:13:55.561 "data_offset": 0, 00:13:55.561 "data_size": 63488 00:13:55.561 }, 00:13:55.561 { 00:13:55.561 "name": "BaseBdev2", 00:13:55.561 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:13:55.561 "is_configured": true, 00:13:55.561 "data_offset": 2048, 00:13:55.561 "data_size": 63488 00:13:55.561 }, 00:13:55.561 { 00:13:55.561 "name": "BaseBdev3", 00:13:55.561 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:13:55.561 "is_configured": true, 00:13:55.561 "data_offset": 2048, 00:13:55.561 "data_size": 63488 00:13:55.561 } 00:13:55.561 ] 00:13:55.561 }' 00:13:55.561 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.821 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:55.821 05:00:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.821 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.821 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:55.821 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.821 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.821 [2024-11-21 05:00:12.353788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.821 [2024-11-21 05:00:12.358651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:13:55.821 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.821 05:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:55.821 [2024-11-21 05:00:12.361052] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.761 "name": "raid_bdev1", 00:13:56.761 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:13:56.761 "strip_size_kb": 64, 00:13:56.761 "state": "online", 00:13:56.761 "raid_level": "raid5f", 00:13:56.761 "superblock": true, 00:13:56.761 "num_base_bdevs": 3, 00:13:56.761 "num_base_bdevs_discovered": 3, 00:13:56.761 "num_base_bdevs_operational": 3, 00:13:56.761 "process": { 00:13:56.761 "type": "rebuild", 00:13:56.761 "target": "spare", 00:13:56.761 "progress": { 00:13:56.761 "blocks": 20480, 00:13:56.761 "percent": 16 00:13:56.761 } 00:13:56.761 }, 00:13:56.761 "base_bdevs_list": [ 00:13:56.761 { 00:13:56.761 "name": "spare", 00:13:56.761 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:13:56.761 "is_configured": true, 00:13:56.761 "data_offset": 2048, 00:13:56.761 "data_size": 63488 00:13:56.761 }, 00:13:56.761 { 00:13:56.761 "name": "BaseBdev2", 00:13:56.761 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:13:56.761 "is_configured": true, 00:13:56.761 "data_offset": 2048, 00:13:56.761 "data_size": 63488 00:13:56.761 }, 00:13:56.761 { 00:13:56.761 "name": "BaseBdev3", 00:13:56.761 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:13:56.761 "is_configured": true, 00:13:56.761 "data_offset": 2048, 00:13:56.761 "data_size": 63488 00:13:56.761 } 00:13:56.761 ] 00:13:56.761 }' 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.761 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:57.021 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=465 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.021 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.021 "name": "raid_bdev1", 00:13:57.021 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:13:57.021 "strip_size_kb": 64, 00:13:57.021 "state": "online", 00:13:57.021 "raid_level": "raid5f", 00:13:57.021 "superblock": true, 00:13:57.021 "num_base_bdevs": 3, 00:13:57.021 "num_base_bdevs_discovered": 3, 00:13:57.021 "num_base_bdevs_operational": 3, 00:13:57.021 "process": { 00:13:57.021 "type": "rebuild", 00:13:57.021 "target": "spare", 00:13:57.021 "progress": { 00:13:57.021 "blocks": 22528, 00:13:57.021 "percent": 17 00:13:57.021 } 00:13:57.021 }, 00:13:57.021 "base_bdevs_list": [ 00:13:57.021 { 00:13:57.022 "name": "spare", 00:13:57.022 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:13:57.022 "is_configured": true, 00:13:57.022 "data_offset": 2048, 00:13:57.022 "data_size": 63488 00:13:57.022 }, 00:13:57.022 { 00:13:57.022 "name": "BaseBdev2", 00:13:57.022 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:13:57.022 "is_configured": true, 00:13:57.022 "data_offset": 2048, 00:13:57.022 "data_size": 63488 00:13:57.022 }, 00:13:57.022 { 00:13:57.022 "name": "BaseBdev3", 00:13:57.022 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:13:57.022 "is_configured": true, 00:13:57.022 "data_offset": 2048, 00:13:57.022 "data_size": 63488 00:13:57.022 } 00:13:57.022 ] 00:13:57.022 }' 00:13:57.022 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.022 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.022 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.022 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.022 05:00:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.961 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.962 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.962 "name": "raid_bdev1", 00:13:57.962 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:13:57.962 "strip_size_kb": 64, 00:13:57.962 "state": "online", 00:13:57.962 "raid_level": "raid5f", 00:13:57.962 "superblock": true, 00:13:57.962 "num_base_bdevs": 3, 00:13:57.962 "num_base_bdevs_discovered": 3, 00:13:57.962 "num_base_bdevs_operational": 3, 00:13:57.962 "process": { 00:13:57.962 "type": "rebuild", 00:13:57.962 "target": "spare", 00:13:57.962 "progress": { 00:13:57.962 "blocks": 45056, 00:13:57.962 "percent": 35 00:13:57.962 } 00:13:57.962 }, 00:13:57.962 "base_bdevs_list": [ 00:13:57.962 { 
00:13:57.962 "name": "spare", 00:13:57.962 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:13:57.962 "is_configured": true, 00:13:57.962 "data_offset": 2048, 00:13:57.962 "data_size": 63488 00:13:57.962 }, 00:13:57.962 { 00:13:57.962 "name": "BaseBdev2", 00:13:57.962 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:13:57.962 "is_configured": true, 00:13:57.962 "data_offset": 2048, 00:13:57.962 "data_size": 63488 00:13:57.962 }, 00:13:57.962 { 00:13:57.962 "name": "BaseBdev3", 00:13:57.962 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:13:57.962 "is_configured": true, 00:13:57.962 "data_offset": 2048, 00:13:57.962 "data_size": 63488 00:13:57.962 } 00:13:57.962 ] 00:13:57.962 }' 00:13:57.962 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.222 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.222 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.222 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.222 05:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.161 "name": "raid_bdev1", 00:13:59.161 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:13:59.161 "strip_size_kb": 64, 00:13:59.161 "state": "online", 00:13:59.161 "raid_level": "raid5f", 00:13:59.161 "superblock": true, 00:13:59.161 "num_base_bdevs": 3, 00:13:59.161 "num_base_bdevs_discovered": 3, 00:13:59.161 "num_base_bdevs_operational": 3, 00:13:59.161 "process": { 00:13:59.161 "type": "rebuild", 00:13:59.161 "target": "spare", 00:13:59.161 "progress": { 00:13:59.161 "blocks": 69632, 00:13:59.161 "percent": 54 00:13:59.161 } 00:13:59.161 }, 00:13:59.161 "base_bdevs_list": [ 00:13:59.161 { 00:13:59.161 "name": "spare", 00:13:59.161 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:13:59.161 "is_configured": true, 00:13:59.161 "data_offset": 2048, 00:13:59.161 "data_size": 63488 00:13:59.161 }, 00:13:59.161 { 00:13:59.161 "name": "BaseBdev2", 00:13:59.161 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:13:59.161 "is_configured": true, 00:13:59.161 "data_offset": 2048, 00:13:59.161 "data_size": 63488 00:13:59.161 }, 00:13:59.161 { 00:13:59.161 "name": "BaseBdev3", 00:13:59.161 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:13:59.161 "is_configured": true, 00:13:59.161 "data_offset": 2048, 00:13:59.161 "data_size": 63488 00:13:59.161 } 00:13:59.161 ] 00:13:59.161 }' 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.161 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.422 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.422 05:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.362 "name": "raid_bdev1", 00:14:00.362 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:00.362 "strip_size_kb": 64, 00:14:00.362 "state": "online", 00:14:00.362 
"raid_level": "raid5f", 00:14:00.362 "superblock": true, 00:14:00.362 "num_base_bdevs": 3, 00:14:00.362 "num_base_bdevs_discovered": 3, 00:14:00.362 "num_base_bdevs_operational": 3, 00:14:00.362 "process": { 00:14:00.362 "type": "rebuild", 00:14:00.362 "target": "spare", 00:14:00.362 "progress": { 00:14:00.362 "blocks": 92160, 00:14:00.362 "percent": 72 00:14:00.362 } 00:14:00.362 }, 00:14:00.362 "base_bdevs_list": [ 00:14:00.362 { 00:14:00.362 "name": "spare", 00:14:00.362 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:14:00.362 "is_configured": true, 00:14:00.362 "data_offset": 2048, 00:14:00.362 "data_size": 63488 00:14:00.362 }, 00:14:00.362 { 00:14:00.362 "name": "BaseBdev2", 00:14:00.362 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:00.362 "is_configured": true, 00:14:00.362 "data_offset": 2048, 00:14:00.362 "data_size": 63488 00:14:00.362 }, 00:14:00.362 { 00:14:00.362 "name": "BaseBdev3", 00:14:00.362 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:00.362 "is_configured": true, 00:14:00.362 "data_offset": 2048, 00:14:00.362 "data_size": 63488 00:14:00.362 } 00:14:00.362 ] 00:14:00.362 }' 00:14:00.362 05:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.362 05:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.362 05:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.362 05:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.362 05:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.744 "name": "raid_bdev1", 00:14:01.744 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:01.744 "strip_size_kb": 64, 00:14:01.744 "state": "online", 00:14:01.744 "raid_level": "raid5f", 00:14:01.744 "superblock": true, 00:14:01.744 "num_base_bdevs": 3, 00:14:01.744 "num_base_bdevs_discovered": 3, 00:14:01.744 "num_base_bdevs_operational": 3, 00:14:01.744 "process": { 00:14:01.744 "type": "rebuild", 00:14:01.744 "target": "spare", 00:14:01.744 "progress": { 00:14:01.744 "blocks": 114688, 00:14:01.744 "percent": 90 00:14:01.744 } 00:14:01.744 }, 00:14:01.744 "base_bdevs_list": [ 00:14:01.744 { 00:14:01.744 "name": "spare", 00:14:01.744 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:14:01.744 "is_configured": true, 00:14:01.744 "data_offset": 2048, 00:14:01.744 "data_size": 63488 00:14:01.744 }, 00:14:01.744 { 00:14:01.744 "name": "BaseBdev2", 00:14:01.744 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:01.744 
"is_configured": true, 00:14:01.744 "data_offset": 2048, 00:14:01.744 "data_size": 63488 00:14:01.744 }, 00:14:01.744 { 00:14:01.744 "name": "BaseBdev3", 00:14:01.744 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:01.744 "is_configured": true, 00:14:01.744 "data_offset": 2048, 00:14:01.744 "data_size": 63488 00:14:01.744 } 00:14:01.744 ] 00:14:01.744 }' 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.744 05:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:02.004 [2024-11-21 05:00:18.595775] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:02.004 [2024-11-21 05:00:18.595882] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:02.005 [2024-11-21 05:00:18.595997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.584 
05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.584 "name": "raid_bdev1", 00:14:02.584 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:02.584 "strip_size_kb": 64, 00:14:02.584 "state": "online", 00:14:02.584 "raid_level": "raid5f", 00:14:02.584 "superblock": true, 00:14:02.584 "num_base_bdevs": 3, 00:14:02.584 "num_base_bdevs_discovered": 3, 00:14:02.584 "num_base_bdevs_operational": 3, 00:14:02.584 "base_bdevs_list": [ 00:14:02.584 { 00:14:02.584 "name": "spare", 00:14:02.584 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:14:02.584 "is_configured": true, 00:14:02.584 "data_offset": 2048, 00:14:02.584 "data_size": 63488 00:14:02.584 }, 00:14:02.584 { 00:14:02.584 "name": "BaseBdev2", 00:14:02.584 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:02.584 "is_configured": true, 00:14:02.584 "data_offset": 2048, 00:14:02.584 "data_size": 63488 00:14:02.584 }, 00:14:02.584 { 00:14:02.584 "name": "BaseBdev3", 00:14:02.584 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:02.584 "is_configured": true, 00:14:02.584 "data_offset": 2048, 00:14:02.584 "data_size": 63488 00:14:02.584 } 00:14:02.584 ] 00:14:02.584 }' 00:14:02.584 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:02.845 
05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.845 "name": "raid_bdev1", 00:14:02.845 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:02.845 "strip_size_kb": 64, 00:14:02.845 "state": "online", 00:14:02.845 "raid_level": "raid5f", 00:14:02.845 "superblock": true, 00:14:02.845 "num_base_bdevs": 3, 00:14:02.845 "num_base_bdevs_discovered": 3, 00:14:02.845 "num_base_bdevs_operational": 3, 00:14:02.845 "base_bdevs_list": [ 00:14:02.845 { 00:14:02.845 "name": "spare", 00:14:02.845 "uuid": 
"e26c9383-98e4-5588-9d15-a3c149e6889c", 00:14:02.845 "is_configured": true, 00:14:02.845 "data_offset": 2048, 00:14:02.845 "data_size": 63488 00:14:02.845 }, 00:14:02.845 { 00:14:02.845 "name": "BaseBdev2", 00:14:02.845 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:02.845 "is_configured": true, 00:14:02.845 "data_offset": 2048, 00:14:02.845 "data_size": 63488 00:14:02.845 }, 00:14:02.845 { 00:14:02.845 "name": "BaseBdev3", 00:14:02.845 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:02.845 "is_configured": true, 00:14:02.845 "data_offset": 2048, 00:14:02.845 "data_size": 63488 00:14:02.845 } 00:14:02.845 ] 00:14:02.845 }' 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.845 "name": "raid_bdev1", 00:14:02.845 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:02.845 "strip_size_kb": 64, 00:14:02.845 "state": "online", 00:14:02.845 "raid_level": "raid5f", 00:14:02.845 "superblock": true, 00:14:02.845 "num_base_bdevs": 3, 00:14:02.845 "num_base_bdevs_discovered": 3, 00:14:02.845 "num_base_bdevs_operational": 3, 00:14:02.845 "base_bdevs_list": [ 00:14:02.845 { 00:14:02.845 "name": "spare", 00:14:02.845 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:14:02.845 "is_configured": true, 00:14:02.845 "data_offset": 2048, 00:14:02.845 "data_size": 63488 00:14:02.845 }, 00:14:02.845 { 00:14:02.845 "name": "BaseBdev2", 00:14:02.845 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:02.845 "is_configured": true, 00:14:02.845 "data_offset": 2048, 00:14:02.845 "data_size": 63488 00:14:02.845 }, 00:14:02.845 { 00:14:02.845 "name": "BaseBdev3", 00:14:02.845 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:02.845 "is_configured": true, 00:14:02.845 "data_offset": 2048, 00:14:02.845 "data_size": 63488 00:14:02.845 } 00:14:02.845 ] 00:14:02.845 }' 
00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.845 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.415 [2024-11-21 05:00:19.947168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.415 [2024-11-21 05:00:19.947243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.415 [2024-11-21 05:00:19.947383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.415 [2024-11-21 05:00:19.947518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.415 [2024-11-21 05:00:19.947569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:03.415 05:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:03.675 /dev/nbd0 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.675 1+0 records in 00:14:03.675 1+0 records out 00:14:03.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347377 s, 11.8 MB/s 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:03.675 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:03.936 /dev/nbd1 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:03.936 05:00:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.936 1+0 records in 00:14:03.936 1+0 records out 00:14:03.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00271576 s, 1.5 MB/s 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:03.936 05:00:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.936 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:04.196 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:04.196 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:04.196 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:04.196 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.196 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.196 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:04.196 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:04.196 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.196 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.196 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:04.456 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.457 [2024-11-21 05:00:20.975049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:04.457 [2024-11-21 05:00:20.975123] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:14:04.457 [2024-11-21 05:00:20.975149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:04.457 [2024-11-21 05:00:20.975158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.457 [2024-11-21 05:00:20.977329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.457 [2024-11-21 05:00:20.977402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:04.457 [2024-11-21 05:00:20.977508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:04.457 [2024-11-21 05:00:20.977545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.457 [2024-11-21 05:00:20.977666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.457 [2024-11-21 05:00:20.977753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.457 spare 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.457 05:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.457 [2024-11-21 05:00:21.077638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:04.457 [2024-11-21 05:00:21.077662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:04.457 [2024-11-21 05:00:21.077906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:14:04.457 [2024-11-21 05:00:21.078345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:04.457 [2024-11-21 
05:00:21.078361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:04.457 [2024-11-21 05:00:21.078488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.457 "name": "raid_bdev1", 00:14:04.457 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:04.457 "strip_size_kb": 64, 00:14:04.457 "state": "online", 00:14:04.457 "raid_level": "raid5f", 00:14:04.457 "superblock": true, 00:14:04.457 "num_base_bdevs": 3, 00:14:04.457 "num_base_bdevs_discovered": 3, 00:14:04.457 "num_base_bdevs_operational": 3, 00:14:04.457 "base_bdevs_list": [ 00:14:04.457 { 00:14:04.457 "name": "spare", 00:14:04.457 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:14:04.457 "is_configured": true, 00:14:04.457 "data_offset": 2048, 00:14:04.457 "data_size": 63488 00:14:04.457 }, 00:14:04.457 { 00:14:04.457 "name": "BaseBdev2", 00:14:04.457 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:04.457 "is_configured": true, 00:14:04.457 "data_offset": 2048, 00:14:04.457 "data_size": 63488 00:14:04.457 }, 00:14:04.457 { 00:14:04.457 "name": "BaseBdev3", 00:14:04.457 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:04.457 "is_configured": true, 00:14:04.457 "data_offset": 2048, 00:14:04.457 "data_size": 63488 00:14:04.457 } 00:14:04.457 ] 00:14:04.457 }' 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.457 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.028 "name": "raid_bdev1", 00:14:05.028 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:05.028 "strip_size_kb": 64, 00:14:05.028 "state": "online", 00:14:05.028 "raid_level": "raid5f", 00:14:05.028 "superblock": true, 00:14:05.028 "num_base_bdevs": 3, 00:14:05.028 "num_base_bdevs_discovered": 3, 00:14:05.028 "num_base_bdevs_operational": 3, 00:14:05.028 "base_bdevs_list": [ 00:14:05.028 { 00:14:05.028 "name": "spare", 00:14:05.028 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:14:05.028 "is_configured": true, 00:14:05.028 "data_offset": 2048, 00:14:05.028 "data_size": 63488 00:14:05.028 }, 00:14:05.028 { 00:14:05.028 "name": "BaseBdev2", 00:14:05.028 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:05.028 "is_configured": true, 00:14:05.028 "data_offset": 2048, 00:14:05.028 "data_size": 63488 00:14:05.028 }, 00:14:05.028 { 00:14:05.028 "name": "BaseBdev3", 00:14:05.028 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:05.028 "is_configured": true, 00:14:05.028 "data_offset": 2048, 00:14:05.028 "data_size": 63488 00:14:05.028 } 00:14:05.028 ] 00:14:05.028 }' 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.028 [2024-11-21 05:00:21.706722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=64 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.028 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.029 "name": "raid_bdev1", 00:14:05.029 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:05.029 "strip_size_kb": 64, 00:14:05.029 "state": "online", 00:14:05.029 "raid_level": "raid5f", 00:14:05.029 "superblock": true, 00:14:05.029 "num_base_bdevs": 3, 00:14:05.029 "num_base_bdevs_discovered": 2, 00:14:05.029 "num_base_bdevs_operational": 2, 00:14:05.029 "base_bdevs_list": [ 00:14:05.029 { 00:14:05.029 "name": null, 00:14:05.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.029 "is_configured": false, 00:14:05.029 "data_offset": 0, 00:14:05.029 "data_size": 63488 00:14:05.029 }, 00:14:05.029 { 00:14:05.029 "name": "BaseBdev2", 00:14:05.029 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:05.029 "is_configured": true, 00:14:05.029 
"data_offset": 2048, 00:14:05.029 "data_size": 63488 00:14:05.029 }, 00:14:05.029 { 00:14:05.029 "name": "BaseBdev3", 00:14:05.029 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:05.029 "is_configured": true, 00:14:05.029 "data_offset": 2048, 00:14:05.029 "data_size": 63488 00:14:05.029 } 00:14:05.029 ] 00:14:05.029 }' 00:14:05.029 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.289 05:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.548 05:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:05.548 05:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.548 05:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.548 [2024-11-21 05:00:22.133984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.548 [2024-11-21 05:00:22.134230] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:05.548 [2024-11-21 05:00:22.134322] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:05.548 [2024-11-21 05:00:22.134392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.548 [2024-11-21 05:00:22.138758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:14:05.548 05:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.548 05:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:05.548 [2024-11-21 05:00:22.140966] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.488 "name": "raid_bdev1", 00:14:06.488 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:06.488 "strip_size_kb": 64, 00:14:06.488 "state": "online", 00:14:06.488 
"raid_level": "raid5f", 00:14:06.488 "superblock": true, 00:14:06.488 "num_base_bdevs": 3, 00:14:06.488 "num_base_bdevs_discovered": 3, 00:14:06.488 "num_base_bdevs_operational": 3, 00:14:06.488 "process": { 00:14:06.488 "type": "rebuild", 00:14:06.488 "target": "spare", 00:14:06.488 "progress": { 00:14:06.488 "blocks": 20480, 00:14:06.488 "percent": 16 00:14:06.488 } 00:14:06.488 }, 00:14:06.488 "base_bdevs_list": [ 00:14:06.488 { 00:14:06.488 "name": "spare", 00:14:06.488 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:14:06.488 "is_configured": true, 00:14:06.488 "data_offset": 2048, 00:14:06.488 "data_size": 63488 00:14:06.488 }, 00:14:06.488 { 00:14:06.488 "name": "BaseBdev2", 00:14:06.488 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:06.488 "is_configured": true, 00:14:06.488 "data_offset": 2048, 00:14:06.488 "data_size": 63488 00:14:06.488 }, 00:14:06.488 { 00:14:06.488 "name": "BaseBdev3", 00:14:06.488 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:06.488 "is_configured": true, 00:14:06.488 "data_offset": 2048, 00:14:06.488 "data_size": 63488 00:14:06.488 } 00:14:06.488 ] 00:14:06.488 }' 00:14:06.488 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.748 [2024-11-21 05:00:23.280949] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.748 [2024-11-21 05:00:23.348362] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.748 [2024-11-21 05:00:23.348417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.748 [2024-11-21 05:00:23.348452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.748 [2024-11-21 05:00:23.348460] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.748 "name": "raid_bdev1", 00:14:06.748 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:06.748 "strip_size_kb": 64, 00:14:06.748 "state": "online", 00:14:06.748 "raid_level": "raid5f", 00:14:06.748 "superblock": true, 00:14:06.748 "num_base_bdevs": 3, 00:14:06.748 "num_base_bdevs_discovered": 2, 00:14:06.748 "num_base_bdevs_operational": 2, 00:14:06.748 "base_bdevs_list": [ 00:14:06.748 { 00:14:06.748 "name": null, 00:14:06.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.748 "is_configured": false, 00:14:06.748 "data_offset": 0, 00:14:06.748 "data_size": 63488 00:14:06.748 }, 00:14:06.748 { 00:14:06.748 "name": "BaseBdev2", 00:14:06.748 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:06.748 "is_configured": true, 00:14:06.748 "data_offset": 2048, 00:14:06.748 "data_size": 63488 00:14:06.748 }, 00:14:06.748 { 00:14:06.748 "name": "BaseBdev3", 00:14:06.748 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:06.748 "is_configured": true, 00:14:06.748 "data_offset": 2048, 00:14:06.748 "data_size": 63488 00:14:06.748 } 00:14:06.748 ] 00:14:06.748 }' 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.748 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.359 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:07.359 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.359 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.359 [2024-11-21 05:00:23.821112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:07.359 [2024-11-21 05:00:23.821232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.359 [2024-11-21 05:00:23.821274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:07.359 [2024-11-21 05:00:23.821301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.359 [2024-11-21 05:00:23.821793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.359 [2024-11-21 05:00:23.821851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:07.359 [2024-11-21 05:00:23.821992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:07.359 [2024-11-21 05:00:23.822033] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:07.359 [2024-11-21 05:00:23.822127] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:07.359 [2024-11-21 05:00:23.822178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.359 [2024-11-21 05:00:23.826569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:14:07.359 spare 00:14:07.359 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.359 05:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:07.359 [2024-11-21 05:00:23.828767] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.297 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.297 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.297 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.297 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.297 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.297 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.297 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.297 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.297 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.298 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.298 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.298 "name": "raid_bdev1", 00:14:08.298 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:08.298 "strip_size_kb": 64, 00:14:08.298 "state": 
"online", 00:14:08.298 "raid_level": "raid5f", 00:14:08.298 "superblock": true, 00:14:08.298 "num_base_bdevs": 3, 00:14:08.298 "num_base_bdevs_discovered": 3, 00:14:08.298 "num_base_bdevs_operational": 3, 00:14:08.298 "process": { 00:14:08.298 "type": "rebuild", 00:14:08.298 "target": "spare", 00:14:08.298 "progress": { 00:14:08.298 "blocks": 20480, 00:14:08.298 "percent": 16 00:14:08.298 } 00:14:08.298 }, 00:14:08.298 "base_bdevs_list": [ 00:14:08.298 { 00:14:08.298 "name": "spare", 00:14:08.298 "uuid": "e26c9383-98e4-5588-9d15-a3c149e6889c", 00:14:08.298 "is_configured": true, 00:14:08.298 "data_offset": 2048, 00:14:08.298 "data_size": 63488 00:14:08.298 }, 00:14:08.298 { 00:14:08.298 "name": "BaseBdev2", 00:14:08.298 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:08.298 "is_configured": true, 00:14:08.298 "data_offset": 2048, 00:14:08.298 "data_size": 63488 00:14:08.298 }, 00:14:08.298 { 00:14:08.298 "name": "BaseBdev3", 00:14:08.298 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:08.298 "is_configured": true, 00:14:08.298 "data_offset": 2048, 00:14:08.298 "data_size": 63488 00:14:08.298 } 00:14:08.298 ] 00:14:08.298 }' 00:14:08.298 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.298 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.298 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.298 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.298 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:08.298 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.298 05:00:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.298 [2024-11-21 05:00:24.988733] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.558 [2024-11-21 05:00:25.035555] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:08.558 [2024-11-21 05:00:25.035667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.558 [2024-11-21 05:00:25.035706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.558 [2024-11-21 05:00:25.035733] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.558 05:00:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.558 "name": "raid_bdev1", 00:14:08.558 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:08.558 "strip_size_kb": 64, 00:14:08.558 "state": "online", 00:14:08.558 "raid_level": "raid5f", 00:14:08.558 "superblock": true, 00:14:08.558 "num_base_bdevs": 3, 00:14:08.558 "num_base_bdevs_discovered": 2, 00:14:08.558 "num_base_bdevs_operational": 2, 00:14:08.558 "base_bdevs_list": [ 00:14:08.558 { 00:14:08.558 "name": null, 00:14:08.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.558 "is_configured": false, 00:14:08.558 "data_offset": 0, 00:14:08.558 "data_size": 63488 00:14:08.558 }, 00:14:08.558 { 00:14:08.558 "name": "BaseBdev2", 00:14:08.558 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:08.558 "is_configured": true, 00:14:08.558 "data_offset": 2048, 00:14:08.558 "data_size": 63488 00:14:08.558 }, 00:14:08.558 { 00:14:08.558 "name": "BaseBdev3", 00:14:08.558 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:08.558 "is_configured": true, 00:14:08.558 "data_offset": 2048, 00:14:08.558 "data_size": 63488 00:14:08.558 } 00:14:08.558 ] 00:14:08.558 }' 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.558 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.818 "name": "raid_bdev1", 00:14:08.818 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:08.818 "strip_size_kb": 64, 00:14:08.818 "state": "online", 00:14:08.818 "raid_level": "raid5f", 00:14:08.818 "superblock": true, 00:14:08.818 "num_base_bdevs": 3, 00:14:08.818 "num_base_bdevs_discovered": 2, 00:14:08.818 "num_base_bdevs_operational": 2, 00:14:08.818 "base_bdevs_list": [ 00:14:08.818 { 00:14:08.818 "name": null, 00:14:08.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.818 "is_configured": false, 00:14:08.818 "data_offset": 0, 00:14:08.818 "data_size": 63488 00:14:08.818 }, 00:14:08.818 { 00:14:08.818 "name": "BaseBdev2", 00:14:08.818 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:08.818 "is_configured": true, 00:14:08.818 "data_offset": 2048, 00:14:08.818 "data_size": 63488 00:14:08.818 }, 00:14:08.818 { 00:14:08.818 "name": "BaseBdev3", 00:14:08.818 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:08.818 
"is_configured": true, 00:14:08.818 "data_offset": 2048, 00:14:08.818 "data_size": 63488 00:14:08.818 } 00:14:08.818 ] 00:14:08.818 }' 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.818 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.078 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.078 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:09.078 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.078 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.078 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.078 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:09.078 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.078 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.078 [2024-11-21 05:00:25.624307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:09.078 [2024-11-21 05:00:25.624364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.078 [2024-11-21 05:00:25.624388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:09.078 [2024-11-21 05:00:25.624399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.078 [2024-11-21 05:00:25.624784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.078 
[2024-11-21 05:00:25.624804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:09.078 [2024-11-21 05:00:25.624870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:09.078 [2024-11-21 05:00:25.624886] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:09.078 [2024-11-21 05:00:25.624893] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:09.078 [2024-11-21 05:00:25.624905] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:09.078 BaseBdev1 00:14:09.078 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.078 05:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.017 05:00:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.017 "name": "raid_bdev1", 00:14:10.017 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:10.017 "strip_size_kb": 64, 00:14:10.017 "state": "online", 00:14:10.017 "raid_level": "raid5f", 00:14:10.017 "superblock": true, 00:14:10.017 "num_base_bdevs": 3, 00:14:10.017 "num_base_bdevs_discovered": 2, 00:14:10.017 "num_base_bdevs_operational": 2, 00:14:10.017 "base_bdevs_list": [ 00:14:10.017 { 00:14:10.017 "name": null, 00:14:10.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.017 "is_configured": false, 00:14:10.017 "data_offset": 0, 00:14:10.017 "data_size": 63488 00:14:10.017 }, 00:14:10.017 { 00:14:10.017 "name": "BaseBdev2", 00:14:10.017 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:10.017 "is_configured": true, 00:14:10.017 "data_offset": 2048, 00:14:10.017 "data_size": 63488 00:14:10.017 }, 00:14:10.017 { 00:14:10.017 "name": "BaseBdev3", 00:14:10.017 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:10.017 "is_configured": true, 00:14:10.017 "data_offset": 2048, 00:14:10.017 "data_size": 63488 00:14:10.017 } 00:14:10.017 ] 00:14:10.017 }' 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.017 05:00:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.586 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.586 "name": "raid_bdev1", 00:14:10.586 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:10.586 "strip_size_kb": 64, 00:14:10.586 "state": "online", 00:14:10.586 "raid_level": "raid5f", 00:14:10.586 "superblock": true, 00:14:10.586 "num_base_bdevs": 3, 00:14:10.587 "num_base_bdevs_discovered": 2, 00:14:10.587 "num_base_bdevs_operational": 2, 00:14:10.587 "base_bdevs_list": [ 00:14:10.587 { 00:14:10.587 "name": null, 00:14:10.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.587 "is_configured": false, 00:14:10.587 "data_offset": 0, 00:14:10.587 "data_size": 63488 00:14:10.587 }, 00:14:10.587 { 00:14:10.587 "name": "BaseBdev2", 00:14:10.587 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 
00:14:10.587 "is_configured": true, 00:14:10.587 "data_offset": 2048, 00:14:10.587 "data_size": 63488 00:14:10.587 }, 00:14:10.587 { 00:14:10.587 "name": "BaseBdev3", 00:14:10.587 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:10.587 "is_configured": true, 00:14:10.587 "data_offset": 2048, 00:14:10.587 "data_size": 63488 00:14:10.587 } 00:14:10.587 ] 00:14:10.587 }' 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.587 05:00:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.587 [2024-11-21 05:00:27.213667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.587 [2024-11-21 05:00:27.213896] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:10.587 [2024-11-21 05:00:27.213944] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:10.587 request: 00:14:10.587 { 00:14:10.587 "base_bdev": "BaseBdev1", 00:14:10.587 "raid_bdev": "raid_bdev1", 00:14:10.587 "method": "bdev_raid_add_base_bdev", 00:14:10.587 "req_id": 1 00:14:10.587 } 00:14:10.587 Got JSON-RPC error response 00:14:10.587 response: 00:14:10.587 { 00:14:10.587 "code": -22, 00:14:10.587 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:10.587 } 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:10.587 05:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.666 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.667 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.667 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.667 "name": "raid_bdev1", 00:14:11.667 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:11.667 "strip_size_kb": 64, 00:14:11.667 "state": "online", 00:14:11.667 "raid_level": "raid5f", 00:14:11.667 "superblock": true, 00:14:11.667 "num_base_bdevs": 3, 00:14:11.667 "num_base_bdevs_discovered": 2, 00:14:11.667 "num_base_bdevs_operational": 2, 00:14:11.667 "base_bdevs_list": [ 00:14:11.667 { 00:14:11.667 "name": null, 00:14:11.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.667 "is_configured": false, 00:14:11.667 "data_offset": 0, 00:14:11.667 "data_size": 63488 00:14:11.667 }, 00:14:11.667 { 00:14:11.667 
"name": "BaseBdev2", 00:14:11.667 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:11.667 "is_configured": true, 00:14:11.667 "data_offset": 2048, 00:14:11.667 "data_size": 63488 00:14:11.667 }, 00:14:11.667 { 00:14:11.667 "name": "BaseBdev3", 00:14:11.667 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:11.667 "is_configured": true, 00:14:11.667 "data_offset": 2048, 00:14:11.667 "data_size": 63488 00:14:11.667 } 00:14:11.667 ] 00:14:11.667 }' 00:14:11.667 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.667 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.938 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.939 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.939 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.939 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.939 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.939 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.939 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.939 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.939 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.939 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.198 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.198 "name": "raid_bdev1", 00:14:12.198 "uuid": "276d9b98-3fbf-4716-92d0-bf95bfaeca06", 00:14:12.198 
"strip_size_kb": 64, 00:14:12.198 "state": "online", 00:14:12.198 "raid_level": "raid5f", 00:14:12.198 "superblock": true, 00:14:12.198 "num_base_bdevs": 3, 00:14:12.198 "num_base_bdevs_discovered": 2, 00:14:12.198 "num_base_bdevs_operational": 2, 00:14:12.198 "base_bdevs_list": [ 00:14:12.198 { 00:14:12.198 "name": null, 00:14:12.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.198 "is_configured": false, 00:14:12.198 "data_offset": 0, 00:14:12.198 "data_size": 63488 00:14:12.198 }, 00:14:12.198 { 00:14:12.198 "name": "BaseBdev2", 00:14:12.198 "uuid": "03ebf0bb-28a5-55e5-8561-80ecb638c600", 00:14:12.198 "is_configured": true, 00:14:12.198 "data_offset": 2048, 00:14:12.198 "data_size": 63488 00:14:12.198 }, 00:14:12.198 { 00:14:12.198 "name": "BaseBdev3", 00:14:12.198 "uuid": "05adf38b-669f-5f03-84d9-1bb178aab0a8", 00:14:12.198 "is_configured": true, 00:14:12.198 "data_offset": 2048, 00:14:12.198 "data_size": 63488 00:14:12.198 } 00:14:12.198 ] 00:14:12.198 }' 00:14:12.198 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.198 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.198 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.198 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.198 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92637 00:14:12.198 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 92637 ']' 00:14:12.199 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 92637 00:14:12.199 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:12.199 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.199 05:00:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92637 00:14:12.199 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.199 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.199 killing process with pid 92637 00:14:12.199 Received shutdown signal, test time was about 60.000000 seconds 00:14:12.199 00:14:12.199 Latency(us) 00:14:12.199 [2024-11-21T05:00:28.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.199 [2024-11-21T05:00:28.934Z] =================================================================================================================== 00:14:12.199 [2024-11-21T05:00:28.934Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:12.199 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92637' 00:14:12.199 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 92637 00:14:12.199 [2024-11-21 05:00:28.824492] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.199 [2024-11-21 05:00:28.824611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.199 05:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 92637 00:14:12.199 [2024-11-21 05:00:28.824685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.199 [2024-11-21 05:00:28.824695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:12.199 [2024-11-21 05:00:28.866656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.458 05:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:12.458 ************************************ 00:14:12.458 END TEST 
raid5f_rebuild_test_sb 00:14:12.458 ************************************ 00:14:12.458 00:14:12.458 real 0m21.274s 00:14:12.458 user 0m27.639s 00:14:12.458 sys 0m2.568s 00:14:12.459 05:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.459 05:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.459 05:00:29 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:12.459 05:00:29 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:12.459 05:00:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:12.459 05:00:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.459 05:00:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.459 ************************************ 00:14:12.459 START TEST raid5f_state_function_test 00:14:12.459 ************************************ 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93374 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93374' 00:14:12.459 Process raid pid: 93374 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93374 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 93374 ']' 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.459 05:00:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.718 [2024-11-21 05:00:29.230336] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:14:12.718 [2024-11-21 05:00:29.230571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.718 [2024-11-21 05:00:29.400356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.718 [2024-11-21 05:00:29.426247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.977 [2024-11-21 05:00:29.468529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.977 [2024-11-21 05:00:29.468655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.548 [2024-11-21 05:00:30.049633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.548 [2024-11-21 05:00:30.049725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.548 [2024-11-21 05:00:30.049765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.548 [2024-11-21 05:00:30.049790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.548 [2024-11-21 05:00:30.049808] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:13.548 [2024-11-21 05:00:30.049830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:13.548 [2024-11-21 05:00:30.049847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:13.548 [2024-11-21 05:00:30.049916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.548 05:00:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.548 "name": "Existed_Raid", 00:14:13.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.548 "strip_size_kb": 64, 00:14:13.548 "state": "configuring", 00:14:13.548 "raid_level": "raid5f", 00:14:13.548 "superblock": false, 00:14:13.548 "num_base_bdevs": 4, 00:14:13.548 "num_base_bdevs_discovered": 0, 00:14:13.548 "num_base_bdevs_operational": 4, 00:14:13.548 "base_bdevs_list": [ 00:14:13.548 { 00:14:13.548 "name": "BaseBdev1", 00:14:13.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.548 "is_configured": false, 00:14:13.548 "data_offset": 0, 00:14:13.548 "data_size": 0 00:14:13.548 }, 00:14:13.548 { 00:14:13.548 "name": "BaseBdev2", 00:14:13.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.548 "is_configured": false, 00:14:13.548 "data_offset": 0, 00:14:13.548 "data_size": 0 00:14:13.548 }, 00:14:13.548 { 00:14:13.548 "name": "BaseBdev3", 00:14:13.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.548 "is_configured": false, 00:14:13.548 "data_offset": 0, 00:14:13.548 "data_size": 0 00:14:13.548 }, 00:14:13.548 { 00:14:13.548 "name": "BaseBdev4", 00:14:13.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.548 "is_configured": false, 00:14:13.548 "data_offset": 0, 00:14:13.548 "data_size": 0 00:14:13.548 } 00:14:13.548 ] 00:14:13.548 }' 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.548 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.808 05:00:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.809 [2024-11-21 05:00:30.464860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:13.809 [2024-11-21 05:00:30.464941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.809 [2024-11-21 05:00:30.472848] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.809 [2024-11-21 05:00:30.472929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.809 [2024-11-21 05:00:30.472957] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.809 [2024-11-21 05:00:30.472980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.809 [2024-11-21 05:00:30.472997] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:13.809 [2024-11-21 05:00:30.473016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:13.809 [2024-11-21 05:00:30.473033] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:13.809 [2024-11-21 05:00:30.473052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.809 [2024-11-21 05:00:30.489954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.809 BaseBdev1 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.809 
05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.809 [ 00:14:13.809 { 00:14:13.809 "name": "BaseBdev1", 00:14:13.809 "aliases": [ 00:14:13.809 "adddd046-f255-4b19-9b1d-7baa42a51ae8" 00:14:13.809 ], 00:14:13.809 "product_name": "Malloc disk", 00:14:13.809 "block_size": 512, 00:14:13.809 "num_blocks": 65536, 00:14:13.809 "uuid": "adddd046-f255-4b19-9b1d-7baa42a51ae8", 00:14:13.809 "assigned_rate_limits": { 00:14:13.809 "rw_ios_per_sec": 0, 00:14:13.809 "rw_mbytes_per_sec": 0, 00:14:13.809 "r_mbytes_per_sec": 0, 00:14:13.809 "w_mbytes_per_sec": 0 00:14:13.809 }, 00:14:13.809 "claimed": true, 00:14:13.809 "claim_type": "exclusive_write", 00:14:13.809 "zoned": false, 00:14:13.809 "supported_io_types": { 00:14:13.809 "read": true, 00:14:13.809 "write": true, 00:14:13.809 "unmap": true, 00:14:13.809 "flush": true, 00:14:13.809 "reset": true, 00:14:13.809 "nvme_admin": false, 00:14:13.809 "nvme_io": false, 00:14:13.809 "nvme_io_md": false, 00:14:13.809 "write_zeroes": true, 00:14:13.809 "zcopy": true, 00:14:13.809 "get_zone_info": false, 00:14:13.809 "zone_management": false, 00:14:13.809 "zone_append": false, 00:14:13.809 "compare": false, 00:14:13.809 "compare_and_write": false, 00:14:13.809 "abort": true, 00:14:13.809 "seek_hole": false, 00:14:13.809 "seek_data": false, 00:14:13.809 "copy": true, 00:14:13.809 "nvme_iov_md": false 00:14:13.809 }, 00:14:13.809 "memory_domains": [ 00:14:13.809 { 00:14:13.809 "dma_device_id": "system", 00:14:13.809 "dma_device_type": 1 00:14:13.809 }, 00:14:13.809 { 00:14:13.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.809 "dma_device_type": 2 00:14:13.809 } 00:14:13.809 ], 00:14:13.809 "driver_specific": {} 00:14:13.809 } 
00:14:13.809 ] 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.809 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:14.068 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.068 "name": "Existed_Raid", 00:14:14.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.068 "strip_size_kb": 64, 00:14:14.068 "state": "configuring", 00:14:14.068 "raid_level": "raid5f", 00:14:14.068 "superblock": false, 00:14:14.068 "num_base_bdevs": 4, 00:14:14.068 "num_base_bdevs_discovered": 1, 00:14:14.068 "num_base_bdevs_operational": 4, 00:14:14.068 "base_bdevs_list": [ 00:14:14.068 { 00:14:14.068 "name": "BaseBdev1", 00:14:14.068 "uuid": "adddd046-f255-4b19-9b1d-7baa42a51ae8", 00:14:14.068 "is_configured": true, 00:14:14.068 "data_offset": 0, 00:14:14.068 "data_size": 65536 00:14:14.068 }, 00:14:14.068 { 00:14:14.068 "name": "BaseBdev2", 00:14:14.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.068 "is_configured": false, 00:14:14.068 "data_offset": 0, 00:14:14.068 "data_size": 0 00:14:14.068 }, 00:14:14.068 { 00:14:14.068 "name": "BaseBdev3", 00:14:14.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.068 "is_configured": false, 00:14:14.068 "data_offset": 0, 00:14:14.068 "data_size": 0 00:14:14.068 }, 00:14:14.068 { 00:14:14.068 "name": "BaseBdev4", 00:14:14.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.068 "is_configured": false, 00:14:14.068 "data_offset": 0, 00:14:14.068 "data_size": 0 00:14:14.068 } 00:14:14.068 ] 00:14:14.068 }' 00:14:14.068 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.068 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.329 
[2024-11-21 05:00:30.973199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:14.329 [2024-11-21 05:00:30.973298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.329 [2024-11-21 05:00:30.981206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.329 [2024-11-21 05:00:30.983091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.329 [2024-11-21 05:00:30.983192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.329 [2024-11-21 05:00:30.983205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:14.329 [2024-11-21 05:00:30.983214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:14.329 [2024-11-21 05:00:30.983220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:14.329 [2024-11-21 05:00:30.983228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.329 05:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.329 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.329 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.329 "name": "Existed_Raid", 00:14:14.329 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:14.329 "strip_size_kb": 64, 00:14:14.329 "state": "configuring", 00:14:14.329 "raid_level": "raid5f", 00:14:14.329 "superblock": false, 00:14:14.329 "num_base_bdevs": 4, 00:14:14.329 "num_base_bdevs_discovered": 1, 00:14:14.329 "num_base_bdevs_operational": 4, 00:14:14.329 "base_bdevs_list": [ 00:14:14.329 { 00:14:14.329 "name": "BaseBdev1", 00:14:14.329 "uuid": "adddd046-f255-4b19-9b1d-7baa42a51ae8", 00:14:14.329 "is_configured": true, 00:14:14.329 "data_offset": 0, 00:14:14.329 "data_size": 65536 00:14:14.329 }, 00:14:14.329 { 00:14:14.329 "name": "BaseBdev2", 00:14:14.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.329 "is_configured": false, 00:14:14.329 "data_offset": 0, 00:14:14.329 "data_size": 0 00:14:14.329 }, 00:14:14.329 { 00:14:14.329 "name": "BaseBdev3", 00:14:14.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.329 "is_configured": false, 00:14:14.329 "data_offset": 0, 00:14:14.329 "data_size": 0 00:14:14.329 }, 00:14:14.329 { 00:14:14.329 "name": "BaseBdev4", 00:14:14.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.329 "is_configured": false, 00:14:14.329 "data_offset": 0, 00:14:14.329 "data_size": 0 00:14:14.329 } 00:14:14.329 ] 00:14:14.329 }' 00:14:14.329 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.329 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.900 [2024-11-21 05:00:31.413278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.900 BaseBdev2 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.900 [ 00:14:14.900 { 00:14:14.900 "name": "BaseBdev2", 00:14:14.900 "aliases": [ 00:14:14.900 "ed5ff545-9a97-41c1-9221-e090be7aae94" 00:14:14.900 ], 00:14:14.900 "product_name": "Malloc disk", 00:14:14.900 "block_size": 512, 00:14:14.900 "num_blocks": 65536, 00:14:14.900 "uuid": "ed5ff545-9a97-41c1-9221-e090be7aae94", 00:14:14.900 "assigned_rate_limits": { 00:14:14.900 "rw_ios_per_sec": 0, 00:14:14.900 "rw_mbytes_per_sec": 0, 00:14:14.900 
"r_mbytes_per_sec": 0, 00:14:14.900 "w_mbytes_per_sec": 0 00:14:14.900 }, 00:14:14.900 "claimed": true, 00:14:14.900 "claim_type": "exclusive_write", 00:14:14.900 "zoned": false, 00:14:14.900 "supported_io_types": { 00:14:14.900 "read": true, 00:14:14.900 "write": true, 00:14:14.900 "unmap": true, 00:14:14.900 "flush": true, 00:14:14.900 "reset": true, 00:14:14.900 "nvme_admin": false, 00:14:14.900 "nvme_io": false, 00:14:14.900 "nvme_io_md": false, 00:14:14.900 "write_zeroes": true, 00:14:14.900 "zcopy": true, 00:14:14.900 "get_zone_info": false, 00:14:14.900 "zone_management": false, 00:14:14.900 "zone_append": false, 00:14:14.900 "compare": false, 00:14:14.900 "compare_and_write": false, 00:14:14.900 "abort": true, 00:14:14.900 "seek_hole": false, 00:14:14.900 "seek_data": false, 00:14:14.900 "copy": true, 00:14:14.900 "nvme_iov_md": false 00:14:14.900 }, 00:14:14.900 "memory_domains": [ 00:14:14.900 { 00:14:14.900 "dma_device_id": "system", 00:14:14.900 "dma_device_type": 1 00:14:14.900 }, 00:14:14.900 { 00:14:14.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.900 "dma_device_type": 2 00:14:14.900 } 00:14:14.900 ], 00:14:14.900 "driver_specific": {} 00:14:14.900 } 00:14:14.900 ] 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.900 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.901 "name": "Existed_Raid", 00:14:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.901 "strip_size_kb": 64, 00:14:14.901 "state": "configuring", 00:14:14.901 "raid_level": "raid5f", 00:14:14.901 "superblock": false, 00:14:14.901 "num_base_bdevs": 4, 00:14:14.901 "num_base_bdevs_discovered": 2, 00:14:14.901 "num_base_bdevs_operational": 4, 00:14:14.901 "base_bdevs_list": [ 00:14:14.901 { 00:14:14.901 "name": "BaseBdev1", 00:14:14.901 "uuid": 
"adddd046-f255-4b19-9b1d-7baa42a51ae8", 00:14:14.901 "is_configured": true, 00:14:14.901 "data_offset": 0, 00:14:14.901 "data_size": 65536 00:14:14.901 }, 00:14:14.901 { 00:14:14.901 "name": "BaseBdev2", 00:14:14.901 "uuid": "ed5ff545-9a97-41c1-9221-e090be7aae94", 00:14:14.901 "is_configured": true, 00:14:14.901 "data_offset": 0, 00:14:14.901 "data_size": 65536 00:14:14.901 }, 00:14:14.901 { 00:14:14.901 "name": "BaseBdev3", 00:14:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.901 "is_configured": false, 00:14:14.901 "data_offset": 0, 00:14:14.901 "data_size": 0 00:14:14.901 }, 00:14:14.901 { 00:14:14.901 "name": "BaseBdev4", 00:14:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.901 "is_configured": false, 00:14:14.901 "data_offset": 0, 00:14:14.901 "data_size": 0 00:14:14.901 } 00:14:14.901 ] 00:14:14.901 }' 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.901 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.161 [2024-11-21 05:00:31.888301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.161 BaseBdev3 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.161 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.422 [ 00:14:15.422 { 00:14:15.422 "name": "BaseBdev3", 00:14:15.422 "aliases": [ 00:14:15.422 "b35ac8c7-c13b-4009-9f9c-5ec3370330f0" 00:14:15.422 ], 00:14:15.422 "product_name": "Malloc disk", 00:14:15.422 "block_size": 512, 00:14:15.422 "num_blocks": 65536, 00:14:15.422 "uuid": "b35ac8c7-c13b-4009-9f9c-5ec3370330f0", 00:14:15.422 "assigned_rate_limits": { 00:14:15.422 "rw_ios_per_sec": 0, 00:14:15.422 "rw_mbytes_per_sec": 0, 00:14:15.422 "r_mbytes_per_sec": 0, 00:14:15.422 "w_mbytes_per_sec": 0 00:14:15.422 }, 00:14:15.422 "claimed": true, 00:14:15.422 "claim_type": "exclusive_write", 00:14:15.422 "zoned": false, 00:14:15.422 "supported_io_types": { 00:14:15.422 "read": true, 00:14:15.422 "write": true, 00:14:15.422 "unmap": true, 00:14:15.422 "flush": true, 00:14:15.422 "reset": true, 00:14:15.422 "nvme_admin": false, 
00:14:15.422 "nvme_io": false, 00:14:15.422 "nvme_io_md": false, 00:14:15.422 "write_zeroes": true, 00:14:15.422 "zcopy": true, 00:14:15.422 "get_zone_info": false, 00:14:15.422 "zone_management": false, 00:14:15.422 "zone_append": false, 00:14:15.422 "compare": false, 00:14:15.422 "compare_and_write": false, 00:14:15.422 "abort": true, 00:14:15.422 "seek_hole": false, 00:14:15.422 "seek_data": false, 00:14:15.422 "copy": true, 00:14:15.422 "nvme_iov_md": false 00:14:15.422 }, 00:14:15.422 "memory_domains": [ 00:14:15.422 { 00:14:15.422 "dma_device_id": "system", 00:14:15.422 "dma_device_type": 1 00:14:15.422 }, 00:14:15.422 { 00:14:15.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.422 "dma_device_type": 2 00:14:15.422 } 00:14:15.422 ], 00:14:15.422 "driver_specific": {} 00:14:15.422 } 00:14:15.422 ] 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.422 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.423 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.423 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.423 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.423 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.423 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.423 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.423 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.423 "name": "Existed_Raid", 00:14:15.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.423 "strip_size_kb": 64, 00:14:15.423 "state": "configuring", 00:14:15.423 "raid_level": "raid5f", 00:14:15.423 "superblock": false, 00:14:15.423 "num_base_bdevs": 4, 00:14:15.423 "num_base_bdevs_discovered": 3, 00:14:15.423 "num_base_bdevs_operational": 4, 00:14:15.423 "base_bdevs_list": [ 00:14:15.423 { 00:14:15.423 "name": "BaseBdev1", 00:14:15.423 "uuid": "adddd046-f255-4b19-9b1d-7baa42a51ae8", 00:14:15.423 "is_configured": true, 00:14:15.423 "data_offset": 0, 00:14:15.423 "data_size": 65536 00:14:15.423 }, 00:14:15.423 { 00:14:15.423 "name": "BaseBdev2", 00:14:15.423 "uuid": "ed5ff545-9a97-41c1-9221-e090be7aae94", 00:14:15.423 "is_configured": true, 00:14:15.423 "data_offset": 0, 00:14:15.423 "data_size": 65536 00:14:15.423 }, 00:14:15.423 { 
00:14:15.423 "name": "BaseBdev3", 00:14:15.423 "uuid": "b35ac8c7-c13b-4009-9f9c-5ec3370330f0", 00:14:15.423 "is_configured": true, 00:14:15.423 "data_offset": 0, 00:14:15.423 "data_size": 65536 00:14:15.423 }, 00:14:15.423 { 00:14:15.423 "name": "BaseBdev4", 00:14:15.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.423 "is_configured": false, 00:14:15.423 "data_offset": 0, 00:14:15.423 "data_size": 0 00:14:15.423 } 00:14:15.423 ] 00:14:15.423 }' 00:14:15.423 05:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.423 05:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.683 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:15.683 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.683 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.943 [2024-11-21 05:00:32.416434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:15.943 [2024-11-21 05:00:32.416598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:15.943 [2024-11-21 05:00:32.416666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:15.943 [2024-11-21 05:00:32.417039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:15.943 [2024-11-21 05:00:32.417630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:15.943 [2024-11-21 05:00:32.417649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:15.943 [2024-11-21 05:00:32.417917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.943 BaseBdev4 00:14:15.943 05:00:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.943 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:15.943 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:15.943 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.943 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:15.943 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.943 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.943 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.943 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.943 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.943 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.944 [ 00:14:15.944 { 00:14:15.944 "name": "BaseBdev4", 00:14:15.944 "aliases": [ 00:14:15.944 "a3836247-0867-4742-baba-75877b27784e" 00:14:15.944 ], 00:14:15.944 "product_name": "Malloc disk", 00:14:15.944 "block_size": 512, 00:14:15.944 "num_blocks": 65536, 00:14:15.944 "uuid": "a3836247-0867-4742-baba-75877b27784e", 00:14:15.944 "assigned_rate_limits": { 00:14:15.944 "rw_ios_per_sec": 0, 00:14:15.944 
"rw_mbytes_per_sec": 0, 00:14:15.944 "r_mbytes_per_sec": 0, 00:14:15.944 "w_mbytes_per_sec": 0 00:14:15.944 }, 00:14:15.944 "claimed": true, 00:14:15.944 "claim_type": "exclusive_write", 00:14:15.944 "zoned": false, 00:14:15.944 "supported_io_types": { 00:14:15.944 "read": true, 00:14:15.944 "write": true, 00:14:15.944 "unmap": true, 00:14:15.944 "flush": true, 00:14:15.944 "reset": true, 00:14:15.944 "nvme_admin": false, 00:14:15.944 "nvme_io": false, 00:14:15.944 "nvme_io_md": false, 00:14:15.944 "write_zeroes": true, 00:14:15.944 "zcopy": true, 00:14:15.944 "get_zone_info": false, 00:14:15.944 "zone_management": false, 00:14:15.944 "zone_append": false, 00:14:15.944 "compare": false, 00:14:15.944 "compare_and_write": false, 00:14:15.944 "abort": true, 00:14:15.944 "seek_hole": false, 00:14:15.944 "seek_data": false, 00:14:15.944 "copy": true, 00:14:15.944 "nvme_iov_md": false 00:14:15.944 }, 00:14:15.944 "memory_domains": [ 00:14:15.944 { 00:14:15.944 "dma_device_id": "system", 00:14:15.944 "dma_device_type": 1 00:14:15.944 }, 00:14:15.944 { 00:14:15.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.944 "dma_device_type": 2 00:14:15.944 } 00:14:15.944 ], 00:14:15.944 "driver_specific": {} 00:14:15.944 } 00:14:15.944 ] 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.944 05:00:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.944 "name": "Existed_Raid", 00:14:15.944 "uuid": "0aaa39f8-4765-41ed-ad99-b9ba91ec198e", 00:14:15.944 "strip_size_kb": 64, 00:14:15.944 "state": "online", 00:14:15.944 "raid_level": "raid5f", 00:14:15.944 "superblock": false, 00:14:15.944 "num_base_bdevs": 4, 00:14:15.944 "num_base_bdevs_discovered": 4, 00:14:15.944 "num_base_bdevs_operational": 4, 00:14:15.944 "base_bdevs_list": [ 00:14:15.944 { 00:14:15.944 "name": 
"BaseBdev1", 00:14:15.944 "uuid": "adddd046-f255-4b19-9b1d-7baa42a51ae8", 00:14:15.944 "is_configured": true, 00:14:15.944 "data_offset": 0, 00:14:15.944 "data_size": 65536 00:14:15.944 }, 00:14:15.944 { 00:14:15.944 "name": "BaseBdev2", 00:14:15.944 "uuid": "ed5ff545-9a97-41c1-9221-e090be7aae94", 00:14:15.944 "is_configured": true, 00:14:15.944 "data_offset": 0, 00:14:15.944 "data_size": 65536 00:14:15.944 }, 00:14:15.944 { 00:14:15.944 "name": "BaseBdev3", 00:14:15.944 "uuid": "b35ac8c7-c13b-4009-9f9c-5ec3370330f0", 00:14:15.944 "is_configured": true, 00:14:15.944 "data_offset": 0, 00:14:15.944 "data_size": 65536 00:14:15.944 }, 00:14:15.944 { 00:14:15.944 "name": "BaseBdev4", 00:14:15.944 "uuid": "a3836247-0867-4742-baba-75877b27784e", 00:14:15.944 "is_configured": true, 00:14:15.944 "data_offset": 0, 00:14:15.944 "data_size": 65536 00:14:15.944 } 00:14:15.944 ] 00:14:15.944 }' 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.944 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.204 [2024-11-21 05:00:32.896499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.204 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:16.204 "name": "Existed_Raid", 00:14:16.204 "aliases": [ 00:14:16.204 "0aaa39f8-4765-41ed-ad99-b9ba91ec198e" 00:14:16.204 ], 00:14:16.204 "product_name": "Raid Volume", 00:14:16.204 "block_size": 512, 00:14:16.204 "num_blocks": 196608, 00:14:16.204 "uuid": "0aaa39f8-4765-41ed-ad99-b9ba91ec198e", 00:14:16.204 "assigned_rate_limits": { 00:14:16.204 "rw_ios_per_sec": 0, 00:14:16.204 "rw_mbytes_per_sec": 0, 00:14:16.204 "r_mbytes_per_sec": 0, 00:14:16.204 "w_mbytes_per_sec": 0 00:14:16.204 }, 00:14:16.204 "claimed": false, 00:14:16.204 "zoned": false, 00:14:16.204 "supported_io_types": { 00:14:16.204 "read": true, 00:14:16.204 "write": true, 00:14:16.204 "unmap": false, 00:14:16.204 "flush": false, 00:14:16.204 "reset": true, 00:14:16.204 "nvme_admin": false, 00:14:16.204 "nvme_io": false, 00:14:16.204 "nvme_io_md": false, 00:14:16.204 "write_zeroes": true, 00:14:16.204 "zcopy": false, 00:14:16.204 "get_zone_info": false, 00:14:16.204 "zone_management": false, 00:14:16.204 "zone_append": false, 00:14:16.204 "compare": false, 00:14:16.204 "compare_and_write": false, 00:14:16.204 "abort": false, 00:14:16.204 "seek_hole": false, 00:14:16.204 "seek_data": false, 00:14:16.204 "copy": false, 00:14:16.204 "nvme_iov_md": false 00:14:16.204 }, 00:14:16.204 "driver_specific": { 00:14:16.204 "raid": { 00:14:16.204 "uuid": "0aaa39f8-4765-41ed-ad99-b9ba91ec198e", 00:14:16.204 "strip_size_kb": 64, 
00:14:16.204 "state": "online", 00:14:16.204 "raid_level": "raid5f", 00:14:16.204 "superblock": false, 00:14:16.204 "num_base_bdevs": 4, 00:14:16.204 "num_base_bdevs_discovered": 4, 00:14:16.204 "num_base_bdevs_operational": 4, 00:14:16.204 "base_bdevs_list": [ 00:14:16.204 { 00:14:16.204 "name": "BaseBdev1", 00:14:16.204 "uuid": "adddd046-f255-4b19-9b1d-7baa42a51ae8", 00:14:16.204 "is_configured": true, 00:14:16.204 "data_offset": 0, 00:14:16.204 "data_size": 65536 00:14:16.204 }, 00:14:16.204 { 00:14:16.204 "name": "BaseBdev2", 00:14:16.204 "uuid": "ed5ff545-9a97-41c1-9221-e090be7aae94", 00:14:16.204 "is_configured": true, 00:14:16.204 "data_offset": 0, 00:14:16.204 "data_size": 65536 00:14:16.204 }, 00:14:16.204 { 00:14:16.204 "name": "BaseBdev3", 00:14:16.204 "uuid": "b35ac8c7-c13b-4009-9f9c-5ec3370330f0", 00:14:16.204 "is_configured": true, 00:14:16.204 "data_offset": 0, 00:14:16.204 "data_size": 65536 00:14:16.204 }, 00:14:16.204 { 00:14:16.204 "name": "BaseBdev4", 00:14:16.204 "uuid": "a3836247-0867-4742-baba-75877b27784e", 00:14:16.204 "is_configured": true, 00:14:16.204 "data_offset": 0, 00:14:16.204 "data_size": 65536 00:14:16.204 } 00:14:16.204 ] 00:14:16.204 } 00:14:16.204 } 00:14:16.204 }' 00:14:16.465 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.465 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:16.465 BaseBdev2 00:14:16.465 BaseBdev3 00:14:16.465 BaseBdev4' 00:14:16.465 05:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.465 05:00:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.465 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.726 [2024-11-21 05:00:33.211730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.726 05:00:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.726 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.726 "name": "Existed_Raid", 00:14:16.726 "uuid": "0aaa39f8-4765-41ed-ad99-b9ba91ec198e", 00:14:16.726 "strip_size_kb": 64, 00:14:16.726 "state": "online", 00:14:16.726 "raid_level": "raid5f", 00:14:16.726 "superblock": false, 00:14:16.726 "num_base_bdevs": 4, 00:14:16.726 "num_base_bdevs_discovered": 3, 00:14:16.726 "num_base_bdevs_operational": 3, 00:14:16.726 "base_bdevs_list": [ 00:14:16.726 { 00:14:16.726 "name": null, 00:14:16.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.726 "is_configured": false, 00:14:16.726 "data_offset": 0, 00:14:16.726 "data_size": 65536 00:14:16.726 }, 00:14:16.726 { 00:14:16.726 "name": "BaseBdev2", 00:14:16.726 "uuid": "ed5ff545-9a97-41c1-9221-e090be7aae94", 00:14:16.726 "is_configured": true, 00:14:16.726 "data_offset": 0, 00:14:16.726 "data_size": 65536 00:14:16.726 }, 00:14:16.726 { 00:14:16.726 "name": "BaseBdev3", 00:14:16.726 "uuid": "b35ac8c7-c13b-4009-9f9c-5ec3370330f0", 00:14:16.726 "is_configured": true, 00:14:16.726 "data_offset": 0, 00:14:16.727 "data_size": 65536 00:14:16.727 }, 00:14:16.727 { 00:14:16.727 "name": "BaseBdev4", 00:14:16.727 "uuid": "a3836247-0867-4742-baba-75877b27784e", 00:14:16.727 "is_configured": true, 00:14:16.727 "data_offset": 0, 00:14:16.727 "data_size": 65536 00:14:16.727 } 00:14:16.727 ] 00:14:16.727 }' 00:14:16.727 
05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.727 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.986 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:16.986 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:16.986 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.986 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.986 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.986 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:16.986 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.247 [2024-11-21 05:00:33.756104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:17.247 [2024-11-21 05:00:33.756335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.247 [2024-11-21 05:00:33.777066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.247 [2024-11-21 05:00:33.836966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.247 [2024-11-21 05:00:33.917517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:17.247 [2024-11-21 05:00:33.917655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.247 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:17.247 05:00:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.507 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:17.508 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:17.508 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:17.508 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:17.508 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:17.508 05:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:17.508 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.508 05:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.508 BaseBdev2 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.508 [ 00:14:17.508 { 00:14:17.508 "name": "BaseBdev2", 00:14:17.508 "aliases": [ 00:14:17.508 "1d3ed2d5-65db-47b5-a333-c5dbd61162fd" 00:14:17.508 ], 00:14:17.508 "product_name": "Malloc disk", 00:14:17.508 "block_size": 512, 00:14:17.508 "num_blocks": 65536, 00:14:17.508 "uuid": "1d3ed2d5-65db-47b5-a333-c5dbd61162fd", 00:14:17.508 "assigned_rate_limits": { 00:14:17.508 "rw_ios_per_sec": 0, 00:14:17.508 "rw_mbytes_per_sec": 0, 00:14:17.508 "r_mbytes_per_sec": 0, 00:14:17.508 "w_mbytes_per_sec": 0 00:14:17.508 }, 00:14:17.508 "claimed": false, 00:14:17.508 "zoned": false, 00:14:17.508 "supported_io_types": { 00:14:17.508 "read": true, 00:14:17.508 "write": true, 00:14:17.508 "unmap": true, 00:14:17.508 "flush": true, 00:14:17.508 "reset": true, 00:14:17.508 "nvme_admin": false, 00:14:17.508 "nvme_io": false, 00:14:17.508 "nvme_io_md": false, 00:14:17.508 "write_zeroes": true, 00:14:17.508 "zcopy": true, 00:14:17.508 "get_zone_info": false, 00:14:17.508 "zone_management": false, 00:14:17.508 "zone_append": false, 00:14:17.508 "compare": false, 00:14:17.508 "compare_and_write": false, 00:14:17.508 "abort": true, 00:14:17.508 "seek_hole": false, 00:14:17.508 "seek_data": false, 00:14:17.508 "copy": true, 00:14:17.508 "nvme_iov_md": false 00:14:17.508 }, 00:14:17.508 "memory_domains": [ 00:14:17.508 { 00:14:17.508 "dma_device_id": "system", 00:14:17.508 "dma_device_type": 1 00:14:17.508 }, 
00:14:17.508 { 00:14:17.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.508 "dma_device_type": 2 00:14:17.508 } 00:14:17.508 ], 00:14:17.508 "driver_specific": {} 00:14:17.508 } 00:14:17.508 ] 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.508 BaseBdev3 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.508 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.508 [ 00:14:17.508 { 00:14:17.508 "name": "BaseBdev3", 00:14:17.508 "aliases": [ 00:14:17.508 "a9a48b25-6b0d-4934-9723-d238ae6af182" 00:14:17.508 ], 00:14:17.508 "product_name": "Malloc disk", 00:14:17.508 "block_size": 512, 00:14:17.508 "num_blocks": 65536, 00:14:17.508 "uuid": "a9a48b25-6b0d-4934-9723-d238ae6af182", 00:14:17.508 "assigned_rate_limits": { 00:14:17.508 "rw_ios_per_sec": 0, 00:14:17.508 "rw_mbytes_per_sec": 0, 00:14:17.508 "r_mbytes_per_sec": 0, 00:14:17.508 "w_mbytes_per_sec": 0 00:14:17.508 }, 00:14:17.508 "claimed": false, 00:14:17.508 "zoned": false, 00:14:17.508 "supported_io_types": { 00:14:17.508 "read": true, 00:14:17.508 "write": true, 00:14:17.508 "unmap": true, 00:14:17.508 "flush": true, 00:14:17.508 "reset": true, 00:14:17.508 "nvme_admin": false, 00:14:17.508 "nvme_io": false, 00:14:17.508 "nvme_io_md": false, 00:14:17.508 "write_zeroes": true, 00:14:17.508 "zcopy": true, 00:14:17.508 "get_zone_info": false, 00:14:17.508 "zone_management": false, 00:14:17.508 "zone_append": false, 00:14:17.508 "compare": false, 00:14:17.508 "compare_and_write": false, 00:14:17.508 "abort": true, 00:14:17.508 "seek_hole": false, 00:14:17.508 "seek_data": false, 00:14:17.508 "copy": true, 00:14:17.508 "nvme_iov_md": false 00:14:17.508 }, 00:14:17.508 "memory_domains": [ 00:14:17.508 { 00:14:17.508 "dma_device_id": "system", 00:14:17.508 
"dma_device_type": 1 00:14:17.508 }, 00:14:17.508 { 00:14:17.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.508 "dma_device_type": 2 00:14:17.508 } 00:14:17.509 ], 00:14:17.509 "driver_specific": {} 00:14:17.509 } 00:14:17.509 ] 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.509 BaseBdev4 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.509 05:00:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.509 [ 00:14:17.509 { 00:14:17.509 "name": "BaseBdev4", 00:14:17.509 "aliases": [ 00:14:17.509 "43965354-8190-4458-a8ee-1fb16ba7dd35" 00:14:17.509 ], 00:14:17.509 "product_name": "Malloc disk", 00:14:17.509 "block_size": 512, 00:14:17.509 "num_blocks": 65536, 00:14:17.509 "uuid": "43965354-8190-4458-a8ee-1fb16ba7dd35", 00:14:17.509 "assigned_rate_limits": { 00:14:17.509 "rw_ios_per_sec": 0, 00:14:17.509 "rw_mbytes_per_sec": 0, 00:14:17.509 "r_mbytes_per_sec": 0, 00:14:17.509 "w_mbytes_per_sec": 0 00:14:17.509 }, 00:14:17.509 "claimed": false, 00:14:17.509 "zoned": false, 00:14:17.509 "supported_io_types": { 00:14:17.509 "read": true, 00:14:17.509 "write": true, 00:14:17.509 "unmap": true, 00:14:17.509 "flush": true, 00:14:17.509 "reset": true, 00:14:17.509 "nvme_admin": false, 00:14:17.509 "nvme_io": false, 00:14:17.509 "nvme_io_md": false, 00:14:17.509 "write_zeroes": true, 00:14:17.509 "zcopy": true, 00:14:17.509 "get_zone_info": false, 00:14:17.509 "zone_management": false, 00:14:17.509 "zone_append": false, 00:14:17.509 "compare": false, 00:14:17.509 "compare_and_write": false, 00:14:17.509 "abort": true, 00:14:17.509 "seek_hole": false, 00:14:17.509 "seek_data": false, 00:14:17.509 "copy": true, 00:14:17.509 "nvme_iov_md": false 00:14:17.509 }, 00:14:17.509 "memory_domains": [ 00:14:17.509 { 00:14:17.509 
"dma_device_id": "system", 00:14:17.509 "dma_device_type": 1 00:14:17.509 }, 00:14:17.509 { 00:14:17.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.509 "dma_device_type": 2 00:14:17.509 } 00:14:17.509 ], 00:14:17.509 "driver_specific": {} 00:14:17.509 } 00:14:17.509 ] 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.509 [2024-11-21 05:00:34.174420] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:17.509 [2024-11-21 05:00:34.174580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:17.509 [2024-11-21 05:00:34.174611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.509 [2024-11-21 05:00:34.176809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.509 [2024-11-21 05:00:34.176869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.509 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.509 "name": "Existed_Raid", 00:14:17.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.509 "strip_size_kb": 64, 00:14:17.509 "state": "configuring", 00:14:17.509 "raid_level": "raid5f", 00:14:17.509 "superblock": false, 00:14:17.509 
"num_base_bdevs": 4, 00:14:17.509 "num_base_bdevs_discovered": 3, 00:14:17.509 "num_base_bdevs_operational": 4, 00:14:17.509 "base_bdevs_list": [ 00:14:17.509 { 00:14:17.509 "name": "BaseBdev1", 00:14:17.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.509 "is_configured": false, 00:14:17.509 "data_offset": 0, 00:14:17.509 "data_size": 0 00:14:17.509 }, 00:14:17.509 { 00:14:17.509 "name": "BaseBdev2", 00:14:17.509 "uuid": "1d3ed2d5-65db-47b5-a333-c5dbd61162fd", 00:14:17.509 "is_configured": true, 00:14:17.509 "data_offset": 0, 00:14:17.509 "data_size": 65536 00:14:17.509 }, 00:14:17.509 { 00:14:17.509 "name": "BaseBdev3", 00:14:17.509 "uuid": "a9a48b25-6b0d-4934-9723-d238ae6af182", 00:14:17.509 "is_configured": true, 00:14:17.509 "data_offset": 0, 00:14:17.510 "data_size": 65536 00:14:17.510 }, 00:14:17.510 { 00:14:17.510 "name": "BaseBdev4", 00:14:17.510 "uuid": "43965354-8190-4458-a8ee-1fb16ba7dd35", 00:14:17.510 "is_configured": true, 00:14:17.510 "data_offset": 0, 00:14:17.510 "data_size": 65536 00:14:17.510 } 00:14:17.510 ] 00:14:17.510 }' 00:14:17.510 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.510 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.080 [2024-11-21 05:00:34.649773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.080 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.080 "name": "Existed_Raid", 00:14:18.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.080 "strip_size_kb": 64, 00:14:18.080 "state": "configuring", 00:14:18.080 "raid_level": "raid5f", 00:14:18.080 "superblock": false, 00:14:18.080 "num_base_bdevs": 4, 
00:14:18.080 "num_base_bdevs_discovered": 2, 00:14:18.080 "num_base_bdevs_operational": 4, 00:14:18.080 "base_bdevs_list": [ 00:14:18.080 { 00:14:18.080 "name": "BaseBdev1", 00:14:18.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.080 "is_configured": false, 00:14:18.080 "data_offset": 0, 00:14:18.080 "data_size": 0 00:14:18.080 }, 00:14:18.080 { 00:14:18.080 "name": null, 00:14:18.080 "uuid": "1d3ed2d5-65db-47b5-a333-c5dbd61162fd", 00:14:18.081 "is_configured": false, 00:14:18.081 "data_offset": 0, 00:14:18.081 "data_size": 65536 00:14:18.081 }, 00:14:18.081 { 00:14:18.081 "name": "BaseBdev3", 00:14:18.081 "uuid": "a9a48b25-6b0d-4934-9723-d238ae6af182", 00:14:18.081 "is_configured": true, 00:14:18.081 "data_offset": 0, 00:14:18.081 "data_size": 65536 00:14:18.081 }, 00:14:18.081 { 00:14:18.081 "name": "BaseBdev4", 00:14:18.081 "uuid": "43965354-8190-4458-a8ee-1fb16ba7dd35", 00:14:18.081 "is_configured": true, 00:14:18.081 "data_offset": 0, 00:14:18.081 "data_size": 65536 00:14:18.081 } 00:14:18.081 ] 00:14:18.081 }' 00:14:18.081 05:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.081 05:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:18.650 05:00:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.650 [2024-11-21 05:00:35.193752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.650 BaseBdev1 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:18.650 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.651 05:00:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.651 [ 00:14:18.651 { 00:14:18.651 "name": "BaseBdev1", 00:14:18.651 "aliases": [ 00:14:18.651 "c8611709-811f-45fd-a9f4-17bbef41e3f6" 00:14:18.651 ], 00:14:18.651 "product_name": "Malloc disk", 00:14:18.651 "block_size": 512, 00:14:18.651 "num_blocks": 65536, 00:14:18.651 "uuid": "c8611709-811f-45fd-a9f4-17bbef41e3f6", 00:14:18.651 "assigned_rate_limits": { 00:14:18.651 "rw_ios_per_sec": 0, 00:14:18.651 "rw_mbytes_per_sec": 0, 00:14:18.651 "r_mbytes_per_sec": 0, 00:14:18.651 "w_mbytes_per_sec": 0 00:14:18.651 }, 00:14:18.651 "claimed": true, 00:14:18.651 "claim_type": "exclusive_write", 00:14:18.651 "zoned": false, 00:14:18.651 "supported_io_types": { 00:14:18.651 "read": true, 00:14:18.651 "write": true, 00:14:18.651 "unmap": true, 00:14:18.651 "flush": true, 00:14:18.651 "reset": true, 00:14:18.651 "nvme_admin": false, 00:14:18.651 "nvme_io": false, 00:14:18.651 "nvme_io_md": false, 00:14:18.651 "write_zeroes": true, 00:14:18.651 "zcopy": true, 00:14:18.651 "get_zone_info": false, 00:14:18.651 "zone_management": false, 00:14:18.651 "zone_append": false, 00:14:18.651 "compare": false, 00:14:18.651 "compare_and_write": false, 00:14:18.651 "abort": true, 00:14:18.651 "seek_hole": false, 00:14:18.651 "seek_data": false, 00:14:18.651 "copy": true, 00:14:18.651 "nvme_iov_md": false 00:14:18.651 }, 00:14:18.651 "memory_domains": [ 00:14:18.651 { 00:14:18.651 "dma_device_id": "system", 00:14:18.651 "dma_device_type": 1 00:14:18.651 }, 00:14:18.651 { 00:14:18.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.651 "dma_device_type": 2 00:14:18.651 } 00:14:18.651 ], 00:14:18.651 "driver_specific": {} 00:14:18.651 } 00:14:18.651 ] 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:18.651 05:00:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.651 "name": "Existed_Raid", 00:14:18.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.651 "strip_size_kb": 64, 00:14:18.651 "state": 
"configuring", 00:14:18.651 "raid_level": "raid5f", 00:14:18.651 "superblock": false, 00:14:18.651 "num_base_bdevs": 4, 00:14:18.651 "num_base_bdevs_discovered": 3, 00:14:18.651 "num_base_bdevs_operational": 4, 00:14:18.651 "base_bdevs_list": [ 00:14:18.651 { 00:14:18.651 "name": "BaseBdev1", 00:14:18.651 "uuid": "c8611709-811f-45fd-a9f4-17bbef41e3f6", 00:14:18.651 "is_configured": true, 00:14:18.651 "data_offset": 0, 00:14:18.651 "data_size": 65536 00:14:18.651 }, 00:14:18.651 { 00:14:18.651 "name": null, 00:14:18.651 "uuid": "1d3ed2d5-65db-47b5-a333-c5dbd61162fd", 00:14:18.651 "is_configured": false, 00:14:18.651 "data_offset": 0, 00:14:18.651 "data_size": 65536 00:14:18.651 }, 00:14:18.651 { 00:14:18.651 "name": "BaseBdev3", 00:14:18.651 "uuid": "a9a48b25-6b0d-4934-9723-d238ae6af182", 00:14:18.651 "is_configured": true, 00:14:18.651 "data_offset": 0, 00:14:18.651 "data_size": 65536 00:14:18.651 }, 00:14:18.651 { 00:14:18.651 "name": "BaseBdev4", 00:14:18.651 "uuid": "43965354-8190-4458-a8ee-1fb16ba7dd35", 00:14:18.651 "is_configured": true, 00:14:18.651 "data_offset": 0, 00:14:18.651 "data_size": 65536 00:14:18.651 } 00:14:18.651 ] 00:14:18.651 }' 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.651 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.220 05:00:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.220 [2024-11-21 05:00:35.724931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.220 05:00:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.220 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.220 "name": "Existed_Raid", 00:14:19.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.220 "strip_size_kb": 64, 00:14:19.220 "state": "configuring", 00:14:19.220 "raid_level": "raid5f", 00:14:19.220 "superblock": false, 00:14:19.220 "num_base_bdevs": 4, 00:14:19.220 "num_base_bdevs_discovered": 2, 00:14:19.221 "num_base_bdevs_operational": 4, 00:14:19.221 "base_bdevs_list": [ 00:14:19.221 { 00:14:19.221 "name": "BaseBdev1", 00:14:19.221 "uuid": "c8611709-811f-45fd-a9f4-17bbef41e3f6", 00:14:19.221 "is_configured": true, 00:14:19.221 "data_offset": 0, 00:14:19.221 "data_size": 65536 00:14:19.221 }, 00:14:19.221 { 00:14:19.221 "name": null, 00:14:19.221 "uuid": "1d3ed2d5-65db-47b5-a333-c5dbd61162fd", 00:14:19.221 "is_configured": false, 00:14:19.221 "data_offset": 0, 00:14:19.221 "data_size": 65536 00:14:19.221 }, 00:14:19.221 { 00:14:19.221 "name": null, 00:14:19.221 "uuid": "a9a48b25-6b0d-4934-9723-d238ae6af182", 00:14:19.221 "is_configured": false, 00:14:19.221 "data_offset": 0, 00:14:19.221 "data_size": 65536 00:14:19.221 }, 00:14:19.221 { 00:14:19.221 "name": "BaseBdev4", 00:14:19.221 "uuid": "43965354-8190-4458-a8ee-1fb16ba7dd35", 00:14:19.221 "is_configured": true, 00:14:19.221 "data_offset": 0, 00:14:19.221 "data_size": 65536 00:14:19.221 } 00:14:19.221 ] 00:14:19.221 }' 00:14:19.221 05:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.221 05:00:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.480 [2024-11-21 05:00:36.200170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.480 
05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.480 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.740 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.740 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.740 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.740 "name": "Existed_Raid", 00:14:19.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.740 "strip_size_kb": 64, 00:14:19.740 "state": "configuring", 00:14:19.740 "raid_level": "raid5f", 00:14:19.740 "superblock": false, 00:14:19.740 "num_base_bdevs": 4, 00:14:19.740 "num_base_bdevs_discovered": 3, 00:14:19.740 "num_base_bdevs_operational": 4, 00:14:19.740 "base_bdevs_list": [ 00:14:19.740 { 00:14:19.740 "name": "BaseBdev1", 00:14:19.740 "uuid": "c8611709-811f-45fd-a9f4-17bbef41e3f6", 00:14:19.740 "is_configured": true, 00:14:19.740 "data_offset": 0, 00:14:19.740 "data_size": 65536 00:14:19.740 }, 00:14:19.740 { 00:14:19.740 "name": null, 00:14:19.740 "uuid": "1d3ed2d5-65db-47b5-a333-c5dbd61162fd", 00:14:19.740 "is_configured": 
false, 00:14:19.740 "data_offset": 0, 00:14:19.740 "data_size": 65536 00:14:19.740 }, 00:14:19.740 { 00:14:19.740 "name": "BaseBdev3", 00:14:19.740 "uuid": "a9a48b25-6b0d-4934-9723-d238ae6af182", 00:14:19.740 "is_configured": true, 00:14:19.740 "data_offset": 0, 00:14:19.740 "data_size": 65536 00:14:19.740 }, 00:14:19.740 { 00:14:19.740 "name": "BaseBdev4", 00:14:19.740 "uuid": "43965354-8190-4458-a8ee-1fb16ba7dd35", 00:14:19.740 "is_configured": true, 00:14:19.740 "data_offset": 0, 00:14:19.740 "data_size": 65536 00:14:19.740 } 00:14:19.740 ] 00:14:19.740 }' 00:14:19.740 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.740 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.000 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.000 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:20.000 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.000 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.000 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.000 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:20.000 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:20.000 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.000 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.000 [2024-11-21 05:00:36.727266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.261 "name": "Existed_Raid", 00:14:20.261 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:20.261 "strip_size_kb": 64, 00:14:20.261 "state": "configuring", 00:14:20.261 "raid_level": "raid5f", 00:14:20.261 "superblock": false, 00:14:20.261 "num_base_bdevs": 4, 00:14:20.261 "num_base_bdevs_discovered": 2, 00:14:20.261 "num_base_bdevs_operational": 4, 00:14:20.261 "base_bdevs_list": [ 00:14:20.261 { 00:14:20.261 "name": null, 00:14:20.261 "uuid": "c8611709-811f-45fd-a9f4-17bbef41e3f6", 00:14:20.261 "is_configured": false, 00:14:20.261 "data_offset": 0, 00:14:20.261 "data_size": 65536 00:14:20.261 }, 00:14:20.261 { 00:14:20.261 "name": null, 00:14:20.261 "uuid": "1d3ed2d5-65db-47b5-a333-c5dbd61162fd", 00:14:20.261 "is_configured": false, 00:14:20.261 "data_offset": 0, 00:14:20.261 "data_size": 65536 00:14:20.261 }, 00:14:20.261 { 00:14:20.261 "name": "BaseBdev3", 00:14:20.261 "uuid": "a9a48b25-6b0d-4934-9723-d238ae6af182", 00:14:20.261 "is_configured": true, 00:14:20.261 "data_offset": 0, 00:14:20.261 "data_size": 65536 00:14:20.261 }, 00:14:20.261 { 00:14:20.261 "name": "BaseBdev4", 00:14:20.261 "uuid": "43965354-8190-4458-a8ee-1fb16ba7dd35", 00:14:20.261 "is_configured": true, 00:14:20.261 "data_offset": 0, 00:14:20.261 "data_size": 65536 00:14:20.261 } 00:14:20.261 ] 00:14:20.261 }' 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.261 05:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.521 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.521 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:20.521 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.521 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.521 05:00:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.521 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:20.521 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:20.521 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.521 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.781 [2024-11-21 05:00:37.255528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.781 "name": "Existed_Raid", 00:14:20.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.781 "strip_size_kb": 64, 00:14:20.781 "state": "configuring", 00:14:20.781 "raid_level": "raid5f", 00:14:20.781 "superblock": false, 00:14:20.781 "num_base_bdevs": 4, 00:14:20.781 "num_base_bdevs_discovered": 3, 00:14:20.781 "num_base_bdevs_operational": 4, 00:14:20.781 "base_bdevs_list": [ 00:14:20.781 { 00:14:20.781 "name": null, 00:14:20.781 "uuid": "c8611709-811f-45fd-a9f4-17bbef41e3f6", 00:14:20.781 "is_configured": false, 00:14:20.781 "data_offset": 0, 00:14:20.781 "data_size": 65536 00:14:20.781 }, 00:14:20.781 { 00:14:20.781 "name": "BaseBdev2", 00:14:20.781 "uuid": "1d3ed2d5-65db-47b5-a333-c5dbd61162fd", 00:14:20.781 "is_configured": true, 00:14:20.781 "data_offset": 0, 00:14:20.781 "data_size": 65536 00:14:20.781 }, 00:14:20.781 { 00:14:20.781 "name": "BaseBdev3", 00:14:20.781 "uuid": "a9a48b25-6b0d-4934-9723-d238ae6af182", 00:14:20.781 "is_configured": true, 00:14:20.781 "data_offset": 0, 00:14:20.781 "data_size": 65536 00:14:20.781 }, 00:14:20.781 { 00:14:20.781 "name": "BaseBdev4", 00:14:20.781 "uuid": "43965354-8190-4458-a8ee-1fb16ba7dd35", 00:14:20.781 "is_configured": true, 00:14:20.781 "data_offset": 0, 00:14:20.781 "data_size": 65536 00:14:20.781 } 00:14:20.781 ] 00:14:20.781 }' 00:14:20.781 05:00:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.781 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.041 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c8611709-811f-45fd-a9f4-17bbef41e3f6 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.302 [2024-11-21 05:00:37.791196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:21.302 [2024-11-21 
05:00:37.791314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:21.302 [2024-11-21 05:00:37.791342] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:21.302 [2024-11-21 05:00:37.791735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:21.302 [2024-11-21 05:00:37.792296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:21.302 [2024-11-21 05:00:37.792360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:21.302 [2024-11-21 05:00:37.792650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.302 NewBaseBdev 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.302 [ 00:14:21.302 { 00:14:21.302 "name": "NewBaseBdev", 00:14:21.302 "aliases": [ 00:14:21.302 "c8611709-811f-45fd-a9f4-17bbef41e3f6" 00:14:21.302 ], 00:14:21.302 "product_name": "Malloc disk", 00:14:21.302 "block_size": 512, 00:14:21.302 "num_blocks": 65536, 00:14:21.302 "uuid": "c8611709-811f-45fd-a9f4-17bbef41e3f6", 00:14:21.302 "assigned_rate_limits": { 00:14:21.302 "rw_ios_per_sec": 0, 00:14:21.302 "rw_mbytes_per_sec": 0, 00:14:21.302 "r_mbytes_per_sec": 0, 00:14:21.302 "w_mbytes_per_sec": 0 00:14:21.302 }, 00:14:21.302 "claimed": true, 00:14:21.302 "claim_type": "exclusive_write", 00:14:21.302 "zoned": false, 00:14:21.302 "supported_io_types": { 00:14:21.302 "read": true, 00:14:21.302 "write": true, 00:14:21.302 "unmap": true, 00:14:21.302 "flush": true, 00:14:21.302 "reset": true, 00:14:21.302 "nvme_admin": false, 00:14:21.302 "nvme_io": false, 00:14:21.302 "nvme_io_md": false, 00:14:21.302 "write_zeroes": true, 00:14:21.302 "zcopy": true, 00:14:21.302 "get_zone_info": false, 00:14:21.302 "zone_management": false, 00:14:21.302 "zone_append": false, 00:14:21.302 "compare": false, 00:14:21.302 "compare_and_write": false, 00:14:21.302 "abort": true, 00:14:21.302 "seek_hole": false, 00:14:21.302 "seek_data": false, 00:14:21.302 "copy": true, 00:14:21.302 "nvme_iov_md": false 00:14:21.302 }, 00:14:21.302 "memory_domains": [ 00:14:21.302 { 00:14:21.302 "dma_device_id": "system", 00:14:21.302 "dma_device_type": 1 00:14:21.302 }, 00:14:21.302 { 00:14:21.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.302 "dma_device_type": 2 00:14:21.302 } 
00:14:21.302 ], 00:14:21.302 "driver_specific": {} 00:14:21.302 } 00:14:21.302 ] 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.302 "name": "Existed_Raid", 00:14:21.302 "uuid": "5301b591-ac9f-49e2-a43d-f8656da76592", 00:14:21.302 "strip_size_kb": 64, 00:14:21.302 "state": "online", 00:14:21.302 "raid_level": "raid5f", 00:14:21.302 "superblock": false, 00:14:21.302 "num_base_bdevs": 4, 00:14:21.302 "num_base_bdevs_discovered": 4, 00:14:21.302 "num_base_bdevs_operational": 4, 00:14:21.302 "base_bdevs_list": [ 00:14:21.302 { 00:14:21.302 "name": "NewBaseBdev", 00:14:21.302 "uuid": "c8611709-811f-45fd-a9f4-17bbef41e3f6", 00:14:21.302 "is_configured": true, 00:14:21.302 "data_offset": 0, 00:14:21.302 "data_size": 65536 00:14:21.302 }, 00:14:21.302 { 00:14:21.302 "name": "BaseBdev2", 00:14:21.302 "uuid": "1d3ed2d5-65db-47b5-a333-c5dbd61162fd", 00:14:21.302 "is_configured": true, 00:14:21.302 "data_offset": 0, 00:14:21.302 "data_size": 65536 00:14:21.302 }, 00:14:21.302 { 00:14:21.302 "name": "BaseBdev3", 00:14:21.302 "uuid": "a9a48b25-6b0d-4934-9723-d238ae6af182", 00:14:21.302 "is_configured": true, 00:14:21.302 "data_offset": 0, 00:14:21.302 "data_size": 65536 00:14:21.302 }, 00:14:21.302 { 00:14:21.302 "name": "BaseBdev4", 00:14:21.302 "uuid": "43965354-8190-4458-a8ee-1fb16ba7dd35", 00:14:21.302 "is_configured": true, 00:14:21.302 "data_offset": 0, 00:14:21.302 "data_size": 65536 00:14:21.302 } 00:14:21.302 ] 00:14:21.302 }' 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.302 05:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.563 [2024-11-21 05:00:38.230873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.563 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.563 "name": "Existed_Raid", 00:14:21.563 "aliases": [ 00:14:21.563 "5301b591-ac9f-49e2-a43d-f8656da76592" 00:14:21.563 ], 00:14:21.563 "product_name": "Raid Volume", 00:14:21.563 "block_size": 512, 00:14:21.563 "num_blocks": 196608, 00:14:21.563 "uuid": "5301b591-ac9f-49e2-a43d-f8656da76592", 00:14:21.563 "assigned_rate_limits": { 00:14:21.563 "rw_ios_per_sec": 0, 00:14:21.563 "rw_mbytes_per_sec": 0, 00:14:21.563 "r_mbytes_per_sec": 0, 00:14:21.563 "w_mbytes_per_sec": 0 00:14:21.563 }, 00:14:21.563 "claimed": false, 00:14:21.563 "zoned": false, 00:14:21.563 "supported_io_types": { 00:14:21.563 "read": true, 00:14:21.563 "write": true, 00:14:21.563 "unmap": false, 00:14:21.563 "flush": false, 00:14:21.563 "reset": true, 00:14:21.563 "nvme_admin": false, 00:14:21.563 "nvme_io": false, 00:14:21.563 "nvme_io_md": 
false, 00:14:21.563 "write_zeroes": true, 00:14:21.563 "zcopy": false, 00:14:21.563 "get_zone_info": false, 00:14:21.563 "zone_management": false, 00:14:21.563 "zone_append": false, 00:14:21.563 "compare": false, 00:14:21.563 "compare_and_write": false, 00:14:21.564 "abort": false, 00:14:21.564 "seek_hole": false, 00:14:21.564 "seek_data": false, 00:14:21.564 "copy": false, 00:14:21.564 "nvme_iov_md": false 00:14:21.564 }, 00:14:21.564 "driver_specific": { 00:14:21.564 "raid": { 00:14:21.564 "uuid": "5301b591-ac9f-49e2-a43d-f8656da76592", 00:14:21.564 "strip_size_kb": 64, 00:14:21.564 "state": "online", 00:14:21.564 "raid_level": "raid5f", 00:14:21.564 "superblock": false, 00:14:21.564 "num_base_bdevs": 4, 00:14:21.564 "num_base_bdevs_discovered": 4, 00:14:21.564 "num_base_bdevs_operational": 4, 00:14:21.564 "base_bdevs_list": [ 00:14:21.564 { 00:14:21.564 "name": "NewBaseBdev", 00:14:21.564 "uuid": "c8611709-811f-45fd-a9f4-17bbef41e3f6", 00:14:21.564 "is_configured": true, 00:14:21.564 "data_offset": 0, 00:14:21.564 "data_size": 65536 00:14:21.564 }, 00:14:21.564 { 00:14:21.564 "name": "BaseBdev2", 00:14:21.564 "uuid": "1d3ed2d5-65db-47b5-a333-c5dbd61162fd", 00:14:21.564 "is_configured": true, 00:14:21.564 "data_offset": 0, 00:14:21.564 "data_size": 65536 00:14:21.564 }, 00:14:21.564 { 00:14:21.564 "name": "BaseBdev3", 00:14:21.564 "uuid": "a9a48b25-6b0d-4934-9723-d238ae6af182", 00:14:21.564 "is_configured": true, 00:14:21.564 "data_offset": 0, 00:14:21.564 "data_size": 65536 00:14:21.564 }, 00:14:21.564 { 00:14:21.564 "name": "BaseBdev4", 00:14:21.564 "uuid": "43965354-8190-4458-a8ee-1fb16ba7dd35", 00:14:21.564 "is_configured": true, 00:14:21.564 "data_offset": 0, 00:14:21.564 "data_size": 65536 00:14:21.564 } 00:14:21.564 ] 00:14:21.564 } 00:14:21.564 } 00:14:21.564 }' 00:14:21.564 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.824 05:00:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:21.824 BaseBdev2 00:14:21.824 BaseBdev3 00:14:21.824 BaseBdev4' 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.824 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.824 05:00:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.084 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.084 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.084 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:22.084 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.084 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.085 [2024-11-21 05:00:38.582074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.085 [2024-11-21 05:00:38.582180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.085 [2024-11-21 05:00:38.582293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.085 [2024-11-21 05:00:38.582650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.085 [2024-11-21 05:00:38.582673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93374 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 93374 ']' 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 93374 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93374 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.085 killing process with pid 93374 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93374' 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 93374 00:14:22.085 [2024-11-21 05:00:38.631132] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.085 05:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 93374 00:14:22.085 [2024-11-21 05:00:38.710851] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.345 05:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:22.345 ************************************ 00:14:22.345 END TEST raid5f_state_function_test 00:14:22.345 ************************************ 00:14:22.345 00:14:22.345 real 0m9.907s 00:14:22.345 user 0m16.668s 00:14:22.345 sys 0m2.105s 00:14:22.345 05:00:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.345 05:00:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.611 05:00:39 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:22.611 05:00:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:22.611 05:00:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.611 05:00:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:22.611 ************************************ 00:14:22.611 START TEST 
raid5f_state_function_test_sb 00:14:22.611 ************************************ 00:14:22.611 05:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:14:22.611 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:22.611 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:22.611 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:22.611 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:22.611 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:22.612 
05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94029 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94029' 00:14:22.612 Process raid pid: 94029 00:14:22.612 05:00:39 
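The trace above (bdev_raid.sh@209-211) shows the test building its base bdev name list with a counting loop before creating the raid. A minimal self-contained sketch of that pattern, with num_base_bdevs=4 matching the 4-disk raid5f run in this log:

```shell
# Build BaseBdev1..BaseBdevN into an array, mirroring the
# "(( i = 1 )) / (( i <= num_base_bdevs )) / echo BaseBdev$i" loop in the trace.
num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[*]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```

The suite then passes this list to `bdev_raid_create -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'`, as seen in the rpc_cmd lines that follow.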
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94029 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 94029 ']' 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.612 05:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.612 [2024-11-21 05:00:39.223930] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:14:22.612 [2024-11-21 05:00:39.224153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.874 [2024-11-21 05:00:39.398106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.874 [2024-11-21 05:00:39.440541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.874 [2024-11-21 05:00:39.518646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.874 [2024-11-21 05:00:39.518816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.443 [2024-11-21 05:00:40.055914] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.443 [2024-11-21 05:00:40.056067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.443 [2024-11-21 05:00:40.056136] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.443 [2024-11-21 05:00:40.056183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.443 [2024-11-21 05:00:40.056223] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:23.443 [2024-11-21 05:00:40.056273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:23.443 [2024-11-21 05:00:40.056320] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:23.443 [2024-11-21 05:00:40.056390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.443 "name": "Existed_Raid", 00:14:23.443 "uuid": "ba1bfbe1-b6f2-4e59-80d2-471062683174", 00:14:23.443 "strip_size_kb": 64, 00:14:23.443 "state": "configuring", 00:14:23.443 "raid_level": "raid5f", 00:14:23.443 "superblock": true, 00:14:23.443 "num_base_bdevs": 4, 00:14:23.443 "num_base_bdevs_discovered": 0, 00:14:23.443 "num_base_bdevs_operational": 4, 00:14:23.443 "base_bdevs_list": [ 00:14:23.443 { 00:14:23.443 "name": "BaseBdev1", 00:14:23.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.443 "is_configured": false, 00:14:23.443 "data_offset": 0, 00:14:23.443 "data_size": 0 00:14:23.443 }, 00:14:23.443 { 00:14:23.443 "name": "BaseBdev2", 00:14:23.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.443 "is_configured": false, 00:14:23.443 "data_offset": 0, 00:14:23.443 "data_size": 0 00:14:23.443 }, 00:14:23.443 { 00:14:23.443 "name": "BaseBdev3", 00:14:23.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.443 "is_configured": false, 00:14:23.443 "data_offset": 0, 00:14:23.443 "data_size": 0 00:14:23.443 }, 00:14:23.443 { 00:14:23.443 "name": "BaseBdev4", 00:14:23.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.443 "is_configured": false, 00:14:23.443 "data_offset": 0, 00:14:23.443 "data_size": 0 00:14:23.443 } 00:14:23.443 ] 00:14:23.443 }' 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.443 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
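The verify_raid_bdev_state helper (bdev_raid.sh@103 in the trace) fetches the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` and checks its fields against expected values. A dependency-free sketch of that check, using the "Existed_Raid" JSON captured in the log above as canned input (substring matching stands in for the suite's jq-based field extraction):

```shell
# Canned copy of the raid_bdev_info JSON from the log; in the suite this
# comes from the bdev_raid_get_bdevs RPC.
raid_bdev_info='{"name": "Existed_Raid", "strip_size_kb": 64, "state": "configuring", "raid_level": "raid5f", "num_base_bdevs": 4, "num_base_bdevs_discovered": 0}'

# Check state, raid level, and strip size, in the spirit of
# verify_raid_bdev_state Existed_Raid configuring raid5f 64 4.
verify_raid_bdev_state() {
  local expected_state=$1 raid_level=$2 strip_size=$3
  [[ $raid_bdev_info == *"\"state\": \"$expected_state\""* ]] || return 1
  [[ $raid_bdev_info == *"\"raid_level\": \"$raid_level\""* ]] || return 1
  [[ $raid_bdev_info == *"\"strip_size_kb\": $strip_size"* ]] || return 1
}

verify_raid_bdev_state configuring raid5f 64 && echo "Existed_Raid state ok"
```

The raid stays in the "configuring" state here because num_base_bdevs_discovered is 0: no base bdev has been attached yet, which is exactly what the subsequent bdev_malloc_create calls change.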
00:14:24.013 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.014 [2024-11-21 05:00:40.511113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.014 [2024-11-21 05:00:40.511272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.014 [2024-11-21 05:00:40.523108] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:24.014 [2024-11-21 05:00:40.523226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:24.014 [2024-11-21 05:00:40.523261] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.014 [2024-11-21 05:00:40.523290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.014 [2024-11-21 05:00:40.523312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:24.014 [2024-11-21 05:00:40.523375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:24.014 [2024-11-21 05:00:40.523418] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:24.014 [2024-11-21 05:00:40.523465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.014 [2024-11-21 05:00:40.550667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.014 BaseBdev1 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.014 [ 00:14:24.014 { 00:14:24.014 "name": "BaseBdev1", 00:14:24.014 "aliases": [ 00:14:24.014 "ed0c6025-0d0d-4487-b989-1ee3fd244f9e" 00:14:24.014 ], 00:14:24.014 "product_name": "Malloc disk", 00:14:24.014 "block_size": 512, 00:14:24.014 "num_blocks": 65536, 00:14:24.014 "uuid": "ed0c6025-0d0d-4487-b989-1ee3fd244f9e", 00:14:24.014 "assigned_rate_limits": { 00:14:24.014 "rw_ios_per_sec": 0, 00:14:24.014 "rw_mbytes_per_sec": 0, 00:14:24.014 "r_mbytes_per_sec": 0, 00:14:24.014 "w_mbytes_per_sec": 0 00:14:24.014 }, 00:14:24.014 "claimed": true, 00:14:24.014 "claim_type": "exclusive_write", 00:14:24.014 "zoned": false, 00:14:24.014 "supported_io_types": { 00:14:24.014 "read": true, 00:14:24.014 "write": true, 00:14:24.014 "unmap": true, 00:14:24.014 "flush": true, 00:14:24.014 "reset": true, 00:14:24.014 "nvme_admin": false, 00:14:24.014 "nvme_io": false, 00:14:24.014 "nvme_io_md": false, 00:14:24.014 "write_zeroes": true, 00:14:24.014 "zcopy": true, 00:14:24.014 "get_zone_info": false, 00:14:24.014 "zone_management": false, 00:14:24.014 "zone_append": false, 00:14:24.014 "compare": false, 00:14:24.014 "compare_and_write": false, 00:14:24.014 "abort": true, 00:14:24.014 "seek_hole": false, 00:14:24.014 "seek_data": false, 00:14:24.014 "copy": true, 00:14:24.014 "nvme_iov_md": false 00:14:24.014 }, 00:14:24.014 "memory_domains": [ 00:14:24.014 { 00:14:24.014 "dma_device_id": "system", 00:14:24.014 "dma_device_type": 1 00:14:24.014 }, 00:14:24.014 { 00:14:24.014 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:24.014 "dma_device_type": 2 00:14:24.014 } 00:14:24.014 ], 00:14:24.014 "driver_specific": {} 00:14:24.014 } 00:14:24.014 ] 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.014 05:00:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.014 "name": "Existed_Raid", 00:14:24.014 "uuid": "db133d5b-9613-4007-9dcf-4f27f92d0037", 00:14:24.014 "strip_size_kb": 64, 00:14:24.014 "state": "configuring", 00:14:24.014 "raid_level": "raid5f", 00:14:24.014 "superblock": true, 00:14:24.014 "num_base_bdevs": 4, 00:14:24.014 "num_base_bdevs_discovered": 1, 00:14:24.014 "num_base_bdevs_operational": 4, 00:14:24.014 "base_bdevs_list": [ 00:14:24.014 { 00:14:24.014 "name": "BaseBdev1", 00:14:24.014 "uuid": "ed0c6025-0d0d-4487-b989-1ee3fd244f9e", 00:14:24.014 "is_configured": true, 00:14:24.014 "data_offset": 2048, 00:14:24.014 "data_size": 63488 00:14:24.014 }, 00:14:24.014 { 00:14:24.014 "name": "BaseBdev2", 00:14:24.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.014 "is_configured": false, 00:14:24.014 "data_offset": 0, 00:14:24.014 "data_size": 0 00:14:24.014 }, 00:14:24.014 { 00:14:24.014 "name": "BaseBdev3", 00:14:24.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.014 "is_configured": false, 00:14:24.014 "data_offset": 0, 00:14:24.014 "data_size": 0 00:14:24.014 }, 00:14:24.014 { 00:14:24.014 "name": "BaseBdev4", 00:14:24.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.014 "is_configured": false, 00:14:24.014 "data_offset": 0, 00:14:24.014 "data_size": 0 00:14:24.014 } 00:14:24.014 ] 00:14:24.014 }' 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.014 05:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.585 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:24.585 05:00:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.585 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.585 [2024-11-21 05:00:41.038068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.585 [2024-11-21 05:00:41.038155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:24.585 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.585 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:24.585 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.586 [2024-11-21 05:00:41.050070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.586 [2024-11-21 05:00:41.052450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.586 [2024-11-21 05:00:41.052542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.586 [2024-11-21 05:00:41.052597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:24.586 [2024-11-21 05:00:41.052642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:24.586 [2024-11-21 05:00:41.052683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:24.586 [2024-11-21 05:00:41.052730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.586 05:00:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.586 "name": "Existed_Raid", 00:14:24.586 "uuid": "0c4a1bc7-37fe-47b0-9a42-4b9a383cfed6", 00:14:24.586 "strip_size_kb": 64, 00:14:24.586 "state": "configuring", 00:14:24.586 "raid_level": "raid5f", 00:14:24.586 "superblock": true, 00:14:24.586 "num_base_bdevs": 4, 00:14:24.586 "num_base_bdevs_discovered": 1, 00:14:24.586 "num_base_bdevs_operational": 4, 00:14:24.586 "base_bdevs_list": [ 00:14:24.586 { 00:14:24.586 "name": "BaseBdev1", 00:14:24.586 "uuid": "ed0c6025-0d0d-4487-b989-1ee3fd244f9e", 00:14:24.586 "is_configured": true, 00:14:24.586 "data_offset": 2048, 00:14:24.586 "data_size": 63488 00:14:24.586 }, 00:14:24.586 { 00:14:24.586 "name": "BaseBdev2", 00:14:24.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.586 "is_configured": false, 00:14:24.586 "data_offset": 0, 00:14:24.586 "data_size": 0 00:14:24.586 }, 00:14:24.586 { 00:14:24.586 "name": "BaseBdev3", 00:14:24.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.586 "is_configured": false, 00:14:24.586 "data_offset": 0, 00:14:24.586 "data_size": 0 00:14:24.586 }, 00:14:24.586 { 00:14:24.586 "name": "BaseBdev4", 00:14:24.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.586 "is_configured": false, 00:14:24.586 "data_offset": 0, 00:14:24.586 "data_size": 0 00:14:24.586 } 00:14:24.586 ] 00:14:24.586 }' 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.586 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 [2024-11-21 05:00:41.510223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.847 BaseBdev2 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 [ 00:14:24.847 { 00:14:24.847 "name": "BaseBdev2", 00:14:24.847 "aliases": [ 00:14:24.847 
"849f3a58-0844-49e4-bd7e-24ed32524dcf" 00:14:24.847 ], 00:14:24.847 "product_name": "Malloc disk", 00:14:24.847 "block_size": 512, 00:14:24.847 "num_blocks": 65536, 00:14:24.847 "uuid": "849f3a58-0844-49e4-bd7e-24ed32524dcf", 00:14:24.847 "assigned_rate_limits": { 00:14:24.847 "rw_ios_per_sec": 0, 00:14:24.847 "rw_mbytes_per_sec": 0, 00:14:24.847 "r_mbytes_per_sec": 0, 00:14:24.847 "w_mbytes_per_sec": 0 00:14:24.847 }, 00:14:24.847 "claimed": true, 00:14:24.847 "claim_type": "exclusive_write", 00:14:24.847 "zoned": false, 00:14:24.847 "supported_io_types": { 00:14:24.847 "read": true, 00:14:24.847 "write": true, 00:14:24.847 "unmap": true, 00:14:24.847 "flush": true, 00:14:24.847 "reset": true, 00:14:24.847 "nvme_admin": false, 00:14:24.847 "nvme_io": false, 00:14:24.847 "nvme_io_md": false, 00:14:24.847 "write_zeroes": true, 00:14:24.847 "zcopy": true, 00:14:24.847 "get_zone_info": false, 00:14:24.847 "zone_management": false, 00:14:24.847 "zone_append": false, 00:14:24.847 "compare": false, 00:14:24.847 "compare_and_write": false, 00:14:24.847 "abort": true, 00:14:24.847 "seek_hole": false, 00:14:24.847 "seek_data": false, 00:14:24.847 "copy": true, 00:14:24.847 "nvme_iov_md": false 00:14:24.847 }, 00:14:24.847 "memory_domains": [ 00:14:24.847 { 00:14:24.847 "dma_device_id": "system", 00:14:24.847 "dma_device_type": 1 00:14:24.847 }, 00:14:24.847 { 00:14:24.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.847 "dma_device_type": 2 00:14:24.847 } 00:14:24.847 ], 00:14:24.847 "driver_specific": {} 00:14:24.847 } 00:14:24.847 ] 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
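After each bdev_malloc_create, the trace runs waitforbdev (autotest_common.sh@903-911), which polls `rpc_cmd bdev_get_bdevs -b <name> -t <timeout>` until the bdev exists. A self-contained sketch of that polling pattern; rpc_cmd is stubbed here for illustration, whereas in the suite it wraps scripts/rpc.py against /var/tmp/spdk.sock:

```shell
# Stub rpc_cmd: pretend the bdev "appears" on the 3rd query.
attempt=0
rpc_cmd() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]
}

# Poll bdev_get_bdevs until the bdev shows up or we give up,
# mirroring the waitforbdev helper in the trace (default timeout 2000).
waitforbdev() {
  local bdev_name=$1 bdev_timeout=${2:-2000} i
  for ((i = 0; i < 10; i++)); do
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" && return 0
    sleep 0.1
  done
  echo "bdev $bdev_name never appeared" >&2
  return 1
}

waitforbdev BaseBdev2 && echo "BaseBdev2 ready"
```

The retry cap and sleep interval are illustrative choices; the real helper's behavior is driven by the -t timeout passed to bdev_get_bdevs.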
00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.847 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.848 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.848 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.848 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.848 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.848 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.107 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.107 "name": "Existed_Raid", 00:14:25.107 "uuid": 
"0c4a1bc7-37fe-47b0-9a42-4b9a383cfed6", 00:14:25.107 "strip_size_kb": 64, 00:14:25.107 "state": "configuring", 00:14:25.107 "raid_level": "raid5f", 00:14:25.108 "superblock": true, 00:14:25.108 "num_base_bdevs": 4, 00:14:25.108 "num_base_bdevs_discovered": 2, 00:14:25.108 "num_base_bdevs_operational": 4, 00:14:25.108 "base_bdevs_list": [ 00:14:25.108 { 00:14:25.108 "name": "BaseBdev1", 00:14:25.108 "uuid": "ed0c6025-0d0d-4487-b989-1ee3fd244f9e", 00:14:25.108 "is_configured": true, 00:14:25.108 "data_offset": 2048, 00:14:25.108 "data_size": 63488 00:14:25.108 }, 00:14:25.108 { 00:14:25.108 "name": "BaseBdev2", 00:14:25.108 "uuid": "849f3a58-0844-49e4-bd7e-24ed32524dcf", 00:14:25.108 "is_configured": true, 00:14:25.108 "data_offset": 2048, 00:14:25.108 "data_size": 63488 00:14:25.108 }, 00:14:25.108 { 00:14:25.108 "name": "BaseBdev3", 00:14:25.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.108 "is_configured": false, 00:14:25.108 "data_offset": 0, 00:14:25.108 "data_size": 0 00:14:25.108 }, 00:14:25.108 { 00:14:25.108 "name": "BaseBdev4", 00:14:25.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.108 "is_configured": false, 00:14:25.108 "data_offset": 0, 00:14:25.108 "data_size": 0 00:14:25.108 } 00:14:25.108 ] 00:14:25.108 }' 00:14:25.108 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.108 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.368 [2024-11-21 05:00:41.952850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.368 BaseBdev3 
00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.368 [ 00:14:25.368 { 00:14:25.368 "name": "BaseBdev3", 00:14:25.368 "aliases": [ 00:14:25.368 "97178239-b2a0-490f-85d9-98bf9be1c76b" 00:14:25.368 ], 00:14:25.368 "product_name": "Malloc disk", 00:14:25.368 "block_size": 512, 00:14:25.368 "num_blocks": 65536, 00:14:25.368 "uuid": "97178239-b2a0-490f-85d9-98bf9be1c76b", 00:14:25.368 
"assigned_rate_limits": { 00:14:25.368 "rw_ios_per_sec": 0, 00:14:25.368 "rw_mbytes_per_sec": 0, 00:14:25.368 "r_mbytes_per_sec": 0, 00:14:25.368 "w_mbytes_per_sec": 0 00:14:25.368 }, 00:14:25.368 "claimed": true, 00:14:25.368 "claim_type": "exclusive_write", 00:14:25.368 "zoned": false, 00:14:25.368 "supported_io_types": { 00:14:25.368 "read": true, 00:14:25.368 "write": true, 00:14:25.368 "unmap": true, 00:14:25.368 "flush": true, 00:14:25.368 "reset": true, 00:14:25.368 "nvme_admin": false, 00:14:25.368 "nvme_io": false, 00:14:25.368 "nvme_io_md": false, 00:14:25.368 "write_zeroes": true, 00:14:25.368 "zcopy": true, 00:14:25.368 "get_zone_info": false, 00:14:25.368 "zone_management": false, 00:14:25.368 "zone_append": false, 00:14:25.368 "compare": false, 00:14:25.368 "compare_and_write": false, 00:14:25.368 "abort": true, 00:14:25.368 "seek_hole": false, 00:14:25.368 "seek_data": false, 00:14:25.368 "copy": true, 00:14:25.368 "nvme_iov_md": false 00:14:25.368 }, 00:14:25.368 "memory_domains": [ 00:14:25.368 { 00:14:25.368 "dma_device_id": "system", 00:14:25.368 "dma_device_type": 1 00:14:25.368 }, 00:14:25.368 { 00:14:25.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.368 "dma_device_type": 2 00:14:25.368 } 00:14:25.368 ], 00:14:25.368 "driver_specific": {} 00:14:25.368 } 00:14:25.368 ] 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.368 05:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.368 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.368 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.368 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.368 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.368 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.368 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.368 "name": "Existed_Raid", 00:14:25.368 "uuid": "0c4a1bc7-37fe-47b0-9a42-4b9a383cfed6", 00:14:25.368 "strip_size_kb": 64, 00:14:25.368 "state": "configuring", 00:14:25.368 "raid_level": "raid5f", 00:14:25.368 "superblock": true, 00:14:25.368 "num_base_bdevs": 4, 00:14:25.368 "num_base_bdevs_discovered": 3, 
00:14:25.368 "num_base_bdevs_operational": 4, 00:14:25.368 "base_bdevs_list": [ 00:14:25.368 { 00:14:25.368 "name": "BaseBdev1", 00:14:25.368 "uuid": "ed0c6025-0d0d-4487-b989-1ee3fd244f9e", 00:14:25.368 "is_configured": true, 00:14:25.368 "data_offset": 2048, 00:14:25.368 "data_size": 63488 00:14:25.368 }, 00:14:25.368 { 00:14:25.368 "name": "BaseBdev2", 00:14:25.368 "uuid": "849f3a58-0844-49e4-bd7e-24ed32524dcf", 00:14:25.368 "is_configured": true, 00:14:25.368 "data_offset": 2048, 00:14:25.368 "data_size": 63488 00:14:25.368 }, 00:14:25.368 { 00:14:25.368 "name": "BaseBdev3", 00:14:25.368 "uuid": "97178239-b2a0-490f-85d9-98bf9be1c76b", 00:14:25.368 "is_configured": true, 00:14:25.368 "data_offset": 2048, 00:14:25.368 "data_size": 63488 00:14:25.368 }, 00:14:25.368 { 00:14:25.368 "name": "BaseBdev4", 00:14:25.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.368 "is_configured": false, 00:14:25.368 "data_offset": 0, 00:14:25.368 "data_size": 0 00:14:25.368 } 00:14:25.368 ] 00:14:25.368 }' 00:14:25.368 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.368 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.939 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:25.939 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.940 [2024-11-21 05:00:42.401080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:25.940 [2024-11-21 05:00:42.401378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:25.940 [2024-11-21 05:00:42.401406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:25.940 BaseBdev4 
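The trace above repeatedly calls `verify_raid_bdev_state Existed_Raid configuring raid5f 64 4` after each `bdev_malloc_create`, checking the JSON that `rpc_cmd bdev_raid_get_bdevs all` returns. As a rough sketch of what that helper from `bdev/bdev_raid.sh` is asserting (field names taken from the dumped record; this is a hypothetical re-implementation, not the actual script), the check can be modeled in Python against a trimmed copy of the record dumped after BaseBdev3 was claimed:

```python
import json

# Hypothetical model of verify_raid_bdev_state from bdev/bdev_raid.sh:
# it selects the named raid bdev from `bdev_raid_get_bdevs all` output and
# compares state, level, strip size, and base-bdev counts.
def verify_raid_bdev_state(info, expected_state, raid_level, strip_size_kb,
                           num_operational):
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # "discovered" counts the entries of base_bdevs_list that are configured
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]

# The Existed_Raid record as dumped above, trimmed to the checked fields.
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": false}
  ]
}""")

verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4)
```

With three of four base bdevs configured the array stays in `configuring`, which is exactly what the log shows until BaseBdev4 is added.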
00:14:25.940 [2024-11-21 05:00:42.401782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:25.940 [2024-11-21 05:00:42.402366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:25.940 [2024-11-21 05:00:42.402385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:25.940 [2024-11-21 05:00:42.402607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:25.940 05:00:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.940 [ 00:14:25.940 { 00:14:25.940 "name": "BaseBdev4", 00:14:25.940 "aliases": [ 00:14:25.940 "3222545c-7953-4a0f-b5e3-47065ad1e904" 00:14:25.940 ], 00:14:25.940 "product_name": "Malloc disk", 00:14:25.940 "block_size": 512, 00:14:25.940 "num_blocks": 65536, 00:14:25.940 "uuid": "3222545c-7953-4a0f-b5e3-47065ad1e904", 00:14:25.940 "assigned_rate_limits": { 00:14:25.940 "rw_ios_per_sec": 0, 00:14:25.940 "rw_mbytes_per_sec": 0, 00:14:25.940 "r_mbytes_per_sec": 0, 00:14:25.940 "w_mbytes_per_sec": 0 00:14:25.940 }, 00:14:25.940 "claimed": true, 00:14:25.940 "claim_type": "exclusive_write", 00:14:25.940 "zoned": false, 00:14:25.940 "supported_io_types": { 00:14:25.940 "read": true, 00:14:25.940 "write": true, 00:14:25.940 "unmap": true, 00:14:25.940 "flush": true, 00:14:25.940 "reset": true, 00:14:25.940 "nvme_admin": false, 00:14:25.940 "nvme_io": false, 00:14:25.940 "nvme_io_md": false, 00:14:25.940 "write_zeroes": true, 00:14:25.940 "zcopy": true, 00:14:25.940 "get_zone_info": false, 00:14:25.940 "zone_management": false, 00:14:25.940 "zone_append": false, 00:14:25.940 "compare": false, 00:14:25.940 "compare_and_write": false, 00:14:25.940 "abort": true, 00:14:25.940 "seek_hole": false, 00:14:25.940 "seek_data": false, 00:14:25.940 "copy": true, 00:14:25.940 "nvme_iov_md": false 00:14:25.940 }, 00:14:25.940 "memory_domains": [ 00:14:25.940 { 00:14:25.940 "dma_device_id": "system", 00:14:25.940 "dma_device_type": 1 00:14:25.940 }, 00:14:25.940 { 00:14:25.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.940 "dma_device_type": 2 00:14:25.940 } 00:14:25.940 ], 00:14:25.940 "driver_specific": {} 00:14:25.940 } 00:14:25.940 ] 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.940 05:00:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.940 "name": "Existed_Raid", 00:14:25.940 "uuid": "0c4a1bc7-37fe-47b0-9a42-4b9a383cfed6", 00:14:25.940 "strip_size_kb": 64, 00:14:25.940 "state": "online", 00:14:25.940 "raid_level": "raid5f", 00:14:25.940 "superblock": true, 00:14:25.940 "num_base_bdevs": 4, 00:14:25.940 "num_base_bdevs_discovered": 4, 00:14:25.940 "num_base_bdevs_operational": 4, 00:14:25.940 "base_bdevs_list": [ 00:14:25.940 { 00:14:25.940 "name": "BaseBdev1", 00:14:25.940 "uuid": "ed0c6025-0d0d-4487-b989-1ee3fd244f9e", 00:14:25.940 "is_configured": true, 00:14:25.940 "data_offset": 2048, 00:14:25.940 "data_size": 63488 00:14:25.940 }, 00:14:25.940 { 00:14:25.940 "name": "BaseBdev2", 00:14:25.940 "uuid": "849f3a58-0844-49e4-bd7e-24ed32524dcf", 00:14:25.940 "is_configured": true, 00:14:25.940 "data_offset": 2048, 00:14:25.940 "data_size": 63488 00:14:25.940 }, 00:14:25.940 { 00:14:25.940 "name": "BaseBdev3", 00:14:25.940 "uuid": "97178239-b2a0-490f-85d9-98bf9be1c76b", 00:14:25.940 "is_configured": true, 00:14:25.940 "data_offset": 2048, 00:14:25.940 "data_size": 63488 00:14:25.940 }, 00:14:25.940 { 00:14:25.940 "name": "BaseBdev4", 00:14:25.940 "uuid": "3222545c-7953-4a0f-b5e3-47065ad1e904", 00:14:25.940 "is_configured": true, 00:14:25.940 "data_offset": 2048, 00:14:25.940 "data_size": 63488 00:14:25.940 } 00:14:25.940 ] 00:14:25.940 }' 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.940 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.201 [2024-11-21 05:00:42.909084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.201 05:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.462 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.462 "name": "Existed_Raid", 00:14:26.462 "aliases": [ 00:14:26.462 "0c4a1bc7-37fe-47b0-9a42-4b9a383cfed6" 00:14:26.462 ], 00:14:26.462 "product_name": "Raid Volume", 00:14:26.462 "block_size": 512, 00:14:26.462 "num_blocks": 190464, 00:14:26.462 "uuid": "0c4a1bc7-37fe-47b0-9a42-4b9a383cfed6", 00:14:26.462 "assigned_rate_limits": { 00:14:26.462 "rw_ios_per_sec": 0, 00:14:26.462 "rw_mbytes_per_sec": 0, 00:14:26.462 "r_mbytes_per_sec": 0, 00:14:26.462 "w_mbytes_per_sec": 0 00:14:26.462 }, 00:14:26.462 "claimed": false, 00:14:26.462 "zoned": false, 00:14:26.462 "supported_io_types": { 00:14:26.462 "read": true, 00:14:26.462 "write": true, 00:14:26.462 "unmap": false, 00:14:26.462 "flush": false, 
00:14:26.462 "reset": true, 00:14:26.462 "nvme_admin": false, 00:14:26.462 "nvme_io": false, 00:14:26.462 "nvme_io_md": false, 00:14:26.462 "write_zeroes": true, 00:14:26.462 "zcopy": false, 00:14:26.462 "get_zone_info": false, 00:14:26.462 "zone_management": false, 00:14:26.462 "zone_append": false, 00:14:26.462 "compare": false, 00:14:26.462 "compare_and_write": false, 00:14:26.462 "abort": false, 00:14:26.462 "seek_hole": false, 00:14:26.462 "seek_data": false, 00:14:26.462 "copy": false, 00:14:26.462 "nvme_iov_md": false 00:14:26.462 }, 00:14:26.462 "driver_specific": { 00:14:26.462 "raid": { 00:14:26.462 "uuid": "0c4a1bc7-37fe-47b0-9a42-4b9a383cfed6", 00:14:26.462 "strip_size_kb": 64, 00:14:26.462 "state": "online", 00:14:26.462 "raid_level": "raid5f", 00:14:26.462 "superblock": true, 00:14:26.462 "num_base_bdevs": 4, 00:14:26.462 "num_base_bdevs_discovered": 4, 00:14:26.462 "num_base_bdevs_operational": 4, 00:14:26.462 "base_bdevs_list": [ 00:14:26.462 { 00:14:26.462 "name": "BaseBdev1", 00:14:26.462 "uuid": "ed0c6025-0d0d-4487-b989-1ee3fd244f9e", 00:14:26.462 "is_configured": true, 00:14:26.462 "data_offset": 2048, 00:14:26.462 "data_size": 63488 00:14:26.462 }, 00:14:26.462 { 00:14:26.462 "name": "BaseBdev2", 00:14:26.462 "uuid": "849f3a58-0844-49e4-bd7e-24ed32524dcf", 00:14:26.462 "is_configured": true, 00:14:26.462 "data_offset": 2048, 00:14:26.462 "data_size": 63488 00:14:26.462 }, 00:14:26.462 { 00:14:26.462 "name": "BaseBdev3", 00:14:26.462 "uuid": "97178239-b2a0-490f-85d9-98bf9be1c76b", 00:14:26.462 "is_configured": true, 00:14:26.462 "data_offset": 2048, 00:14:26.462 "data_size": 63488 00:14:26.462 }, 00:14:26.462 { 00:14:26.462 "name": "BaseBdev4", 00:14:26.462 "uuid": "3222545c-7953-4a0f-b5e3-47065ad1e904", 00:14:26.462 "is_configured": true, 00:14:26.462 "data_offset": 2048, 00:14:26.462 "data_size": 63488 00:14:26.462 } 00:14:26.462 ] 00:14:26.462 } 00:14:26.462 } 00:14:26.462 }' 00:14:26.462 05:00:42 bdev_raid.raid5f_state_function_test_sb -- 
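The Raid Volume dump above reports `num_blocks: 190464` while each base bdev contributes `data_size: 63488` blocks. That is consistent with raid5f dedicating one base bdev's worth of space per stripe to parity, so usable capacity is (n − 1) × data_size; a quick arithmetic check:

```python
# Capacity arithmetic for the raid5f volume dumped above.
num_base_bdevs = 4
data_size_blocks = 63488   # per base bdev, from "data_size" in the dump
block_size = 512           # from "block_size" in the dump

# raid5f stores one strip of parity per stripe, leaving (n - 1) bdevs of data
usable_blocks = (num_base_bdevs - 1) * data_size_blocks
assert usable_blocks == 190464   # matches "num_blocks" of the Raid Volume

usable_bytes = usable_blocks * block_size  # 97517568 bytes
```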
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.462 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:26.462 BaseBdev2 00:14:26.462 BaseBdev3 00:14:26.462 BaseBdev4' 00:14:26.462 05:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.462 05:00:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
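The `verify_raid_bdev_properties` loop above compares `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` between the raid volume and each base bdev. jq's `join` renders `null` fields as empty strings, which is why the comparison strings carry trailing spaces (`cmp_raid_bdev='512 '` and the match `[[ 512 == \5\1\2\ \ \ ]]`, i.e. `"512   "`). A sketch of the same key construction, assuming the metadata fields are simply absent as the trimmed strings in the log suggest:

```python
def fields_key(bdev):
    """Mimic jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`:
    missing/null fields become empty strings, so a bdev without DIF metadata
    yields '512   ' (three trailing separators)."""
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev[k]) for k in keys)

raid_volume = {"block_size": 512}   # md_size / md_interleave / dif_type unset
base_bdev = {"block_size": 512}

assert fields_key(raid_volume) == "512   "
assert fields_key(base_bdev) == fields_key(raid_volume)
```

Comparing the joined keys rather than individual fields lets the script detect any mismatch in block size or metadata layout between the volume and its base bdevs in one string equality.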
00:14:26.462 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.723 [2024-11-21 05:00:43.240298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.723 "name": "Existed_Raid", 00:14:26.723 "uuid": "0c4a1bc7-37fe-47b0-9a42-4b9a383cfed6", 00:14:26.723 "strip_size_kb": 64, 00:14:26.723 "state": "online", 00:14:26.723 "raid_level": "raid5f", 00:14:26.723 "superblock": true, 00:14:26.723 "num_base_bdevs": 4, 00:14:26.723 "num_base_bdevs_discovered": 3, 00:14:26.723 "num_base_bdevs_operational": 3, 00:14:26.723 "base_bdevs_list": [ 00:14:26.723 { 00:14:26.723 "name": null, 00:14:26.723 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:26.723 "is_configured": false, 00:14:26.723 "data_offset": 0, 00:14:26.723 "data_size": 63488 00:14:26.723 }, 00:14:26.723 { 00:14:26.723 "name": "BaseBdev2", 00:14:26.723 "uuid": "849f3a58-0844-49e4-bd7e-24ed32524dcf", 00:14:26.723 "is_configured": true, 00:14:26.723 "data_offset": 2048, 00:14:26.723 "data_size": 63488 00:14:26.723 }, 00:14:26.723 { 00:14:26.723 "name": "BaseBdev3", 00:14:26.723 "uuid": "97178239-b2a0-490f-85d9-98bf9be1c76b", 00:14:26.723 "is_configured": true, 00:14:26.723 "data_offset": 2048, 00:14:26.723 "data_size": 63488 00:14:26.723 }, 00:14:26.723 { 00:14:26.723 "name": "BaseBdev4", 00:14:26.723 "uuid": "3222545c-7953-4a0f-b5e3-47065ad1e904", 00:14:26.723 "is_configured": true, 00:14:26.723 "data_offset": 2048, 00:14:26.723 "data_size": 63488 00:14:26.723 } 00:14:26.723 ] 00:14:26.723 }' 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.723 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.983 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:26.983 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:26.983 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:26.983 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.983 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.983 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
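After `bdev_malloc_delete BaseBdev1`, `has_redundancy raid5f` returns 0 and the script expects the array to stay `online` with 3 of 4 base bdevs discovered, as the dump above confirms. A hypothetical model of that expectation (raid5f, like RAID-5, keeps single parity and so tolerates exactly one missing base bdev):

```python
def expected_raid5f_state(removed):
    # raid5f stores one strip of parity per stripe, so it can reconstruct
    # data with exactly one base bdev missing; a second loss takes it offline.
    return "online" if removed <= 1 else "offline"

assert expected_raid5f_state(1) == "online"   # after deleting BaseBdev1
assert expected_raid5f_state(2) == "offline"  # after also deleting BaseBdev2
```

This matches the trace further down, where the subsequent `bdev_malloc_delete BaseBdev2` triggers `raid_bdev_deconfigure: raid bdev state changing from online to offline`.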
00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.243 [2024-11-21 05:00:43.732394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.243 [2024-11-21 05:00:43.732758] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.243 [2024-11-21 05:00:43.753694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.243 
05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.243 [2024-11-21 05:00:43.809646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.243 [2024-11-21 05:00:43.886346] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:27.243 [2024-11-21 05:00:43.886502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.243 05:00:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.504 BaseBdev2 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.504 05:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.504 [ 00:14:27.504 { 00:14:27.504 "name": "BaseBdev2", 00:14:27.504 "aliases": [ 00:14:27.504 "1dbbdc80-ca95-4dde-91ce-71d2e812c321" 00:14:27.504 ], 00:14:27.504 "product_name": "Malloc disk", 00:14:27.504 "block_size": 512, 00:14:27.504 "num_blocks": 65536, 00:14:27.504 "uuid": 
"1dbbdc80-ca95-4dde-91ce-71d2e812c321", 00:14:27.504 "assigned_rate_limits": { 00:14:27.504 "rw_ios_per_sec": 0, 00:14:27.504 "rw_mbytes_per_sec": 0, 00:14:27.504 "r_mbytes_per_sec": 0, 00:14:27.504 "w_mbytes_per_sec": 0 00:14:27.504 }, 00:14:27.504 "claimed": false, 00:14:27.504 "zoned": false, 00:14:27.504 "supported_io_types": { 00:14:27.504 "read": true, 00:14:27.504 "write": true, 00:14:27.504 "unmap": true, 00:14:27.504 "flush": true, 00:14:27.504 "reset": true, 00:14:27.504 "nvme_admin": false, 00:14:27.504 "nvme_io": false, 00:14:27.504 "nvme_io_md": false, 00:14:27.504 "write_zeroes": true, 00:14:27.504 "zcopy": true, 00:14:27.504 "get_zone_info": false, 00:14:27.504 "zone_management": false, 00:14:27.504 "zone_append": false, 00:14:27.504 "compare": false, 00:14:27.504 "compare_and_write": false, 00:14:27.504 "abort": true, 00:14:27.504 "seek_hole": false, 00:14:27.504 "seek_data": false, 00:14:27.504 "copy": true, 00:14:27.504 "nvme_iov_md": false 00:14:27.504 }, 00:14:27.504 "memory_domains": [ 00:14:27.504 { 00:14:27.504 "dma_device_id": "system", 00:14:27.504 "dma_device_type": 1 00:14:27.504 }, 00:14:27.504 { 00:14:27.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.504 "dma_device_type": 2 00:14:27.504 } 00:14:27.504 ], 00:14:27.504 "driver_specific": {} 00:14:27.504 } 00:14:27.504 ] 00:14:27.504 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.504 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:27.504 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:27.504 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.504 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:27.504 05:00:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.504 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.504 BaseBdev3 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.505 [ 00:14:27.505 { 00:14:27.505 "name": "BaseBdev3", 00:14:27.505 "aliases": [ 00:14:27.505 "18f173ee-226d-4222-81a3-9e9ab8483e06" 00:14:27.505 ], 00:14:27.505 
"product_name": "Malloc disk", 00:14:27.505 "block_size": 512, 00:14:27.505 "num_blocks": 65536, 00:14:27.505 "uuid": "18f173ee-226d-4222-81a3-9e9ab8483e06", 00:14:27.505 "assigned_rate_limits": { 00:14:27.505 "rw_ios_per_sec": 0, 00:14:27.505 "rw_mbytes_per_sec": 0, 00:14:27.505 "r_mbytes_per_sec": 0, 00:14:27.505 "w_mbytes_per_sec": 0 00:14:27.505 }, 00:14:27.505 "claimed": false, 00:14:27.505 "zoned": false, 00:14:27.505 "supported_io_types": { 00:14:27.505 "read": true, 00:14:27.505 "write": true, 00:14:27.505 "unmap": true, 00:14:27.505 "flush": true, 00:14:27.505 "reset": true, 00:14:27.505 "nvme_admin": false, 00:14:27.505 "nvme_io": false, 00:14:27.505 "nvme_io_md": false, 00:14:27.505 "write_zeroes": true, 00:14:27.505 "zcopy": true, 00:14:27.505 "get_zone_info": false, 00:14:27.505 "zone_management": false, 00:14:27.505 "zone_append": false, 00:14:27.505 "compare": false, 00:14:27.505 "compare_and_write": false, 00:14:27.505 "abort": true, 00:14:27.505 "seek_hole": false, 00:14:27.505 "seek_data": false, 00:14:27.505 "copy": true, 00:14:27.505 "nvme_iov_md": false 00:14:27.505 }, 00:14:27.505 "memory_domains": [ 00:14:27.505 { 00:14:27.505 "dma_device_id": "system", 00:14:27.505 "dma_device_type": 1 00:14:27.505 }, 00:14:27.505 { 00:14:27.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.505 "dma_device_type": 2 00:14:27.505 } 00:14:27.505 ], 00:14:27.505 "driver_specific": {} 00:14:27.505 } 00:14:27.505 ] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.505 BaseBdev4 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.505 [ 00:14:27.505 { 00:14:27.505 "name": "BaseBdev4", 00:14:27.505 
"aliases": [ 00:14:27.505 "56642e14-b9ea-4a4a-8279-4638680356d1" 00:14:27.505 ], 00:14:27.505 "product_name": "Malloc disk", 00:14:27.505 "block_size": 512, 00:14:27.505 "num_blocks": 65536, 00:14:27.505 "uuid": "56642e14-b9ea-4a4a-8279-4638680356d1", 00:14:27.505 "assigned_rate_limits": { 00:14:27.505 "rw_ios_per_sec": 0, 00:14:27.505 "rw_mbytes_per_sec": 0, 00:14:27.505 "r_mbytes_per_sec": 0, 00:14:27.505 "w_mbytes_per_sec": 0 00:14:27.505 }, 00:14:27.505 "claimed": false, 00:14:27.505 "zoned": false, 00:14:27.505 "supported_io_types": { 00:14:27.505 "read": true, 00:14:27.505 "write": true, 00:14:27.505 "unmap": true, 00:14:27.505 "flush": true, 00:14:27.505 "reset": true, 00:14:27.505 "nvme_admin": false, 00:14:27.505 "nvme_io": false, 00:14:27.505 "nvme_io_md": false, 00:14:27.505 "write_zeroes": true, 00:14:27.505 "zcopy": true, 00:14:27.505 "get_zone_info": false, 00:14:27.505 "zone_management": false, 00:14:27.505 "zone_append": false, 00:14:27.505 "compare": false, 00:14:27.505 "compare_and_write": false, 00:14:27.505 "abort": true, 00:14:27.505 "seek_hole": false, 00:14:27.505 "seek_data": false, 00:14:27.505 "copy": true, 00:14:27.505 "nvme_iov_md": false 00:14:27.505 }, 00:14:27.505 "memory_domains": [ 00:14:27.505 { 00:14:27.505 "dma_device_id": "system", 00:14:27.505 "dma_device_type": 1 00:14:27.505 }, 00:14:27.505 { 00:14:27.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.505 "dma_device_type": 2 00:14:27.505 } 00:14:27.505 ], 00:14:27.505 "driver_specific": {} 00:14:27.505 } 00:14:27.505 ] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.505 
05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.505 [2024-11-21 05:00:44.141558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:27.505 [2024-11-21 05:00:44.141694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:27.505 [2024-11-21 05:00:44.141746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.505 [2024-11-21 05:00:44.144013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.505 [2024-11-21 05:00:44.144138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.505 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.505 "name": "Existed_Raid", 00:14:27.506 "uuid": "0acbc19e-56a2-4145-8eee-a13109bbb450", 00:14:27.506 "strip_size_kb": 64, 00:14:27.506 "state": "configuring", 00:14:27.506 "raid_level": "raid5f", 00:14:27.506 "superblock": true, 00:14:27.506 "num_base_bdevs": 4, 00:14:27.506 "num_base_bdevs_discovered": 3, 00:14:27.506 "num_base_bdevs_operational": 4, 00:14:27.506 "base_bdevs_list": [ 00:14:27.506 { 00:14:27.506 "name": "BaseBdev1", 00:14:27.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.506 "is_configured": false, 00:14:27.506 "data_offset": 0, 00:14:27.506 "data_size": 0 00:14:27.506 }, 00:14:27.506 { 00:14:27.506 "name": "BaseBdev2", 00:14:27.506 "uuid": "1dbbdc80-ca95-4dde-91ce-71d2e812c321", 00:14:27.506 "is_configured": true, 00:14:27.506 "data_offset": 2048, 00:14:27.506 "data_size": 63488 00:14:27.506 }, 00:14:27.506 { 00:14:27.506 "name": "BaseBdev3", 
00:14:27.506 "uuid": "18f173ee-226d-4222-81a3-9e9ab8483e06", 00:14:27.506 "is_configured": true, 00:14:27.506 "data_offset": 2048, 00:14:27.506 "data_size": 63488 00:14:27.506 }, 00:14:27.506 { 00:14:27.506 "name": "BaseBdev4", 00:14:27.506 "uuid": "56642e14-b9ea-4a4a-8279-4638680356d1", 00:14:27.506 "is_configured": true, 00:14:27.506 "data_offset": 2048, 00:14:27.506 "data_size": 63488 00:14:27.506 } 00:14:27.506 ] 00:14:27.506 }' 00:14:27.506 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.506 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.077 [2024-11-21 05:00:44.599572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.077 
05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.077 "name": "Existed_Raid", 00:14:28.077 "uuid": "0acbc19e-56a2-4145-8eee-a13109bbb450", 00:14:28.077 "strip_size_kb": 64, 00:14:28.077 "state": "configuring", 00:14:28.077 "raid_level": "raid5f", 00:14:28.077 "superblock": true, 00:14:28.077 "num_base_bdevs": 4, 00:14:28.077 "num_base_bdevs_discovered": 2, 00:14:28.077 "num_base_bdevs_operational": 4, 00:14:28.077 "base_bdevs_list": [ 00:14:28.077 { 00:14:28.077 "name": "BaseBdev1", 00:14:28.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.077 "is_configured": false, 00:14:28.077 "data_offset": 0, 00:14:28.077 "data_size": 0 00:14:28.077 }, 00:14:28.077 { 00:14:28.077 "name": null, 00:14:28.077 "uuid": "1dbbdc80-ca95-4dde-91ce-71d2e812c321", 00:14:28.077 "is_configured": false, 00:14:28.077 "data_offset": 0, 00:14:28.077 "data_size": 63488 00:14:28.077 }, 00:14:28.077 { 
00:14:28.077 "name": "BaseBdev3", 00:14:28.077 "uuid": "18f173ee-226d-4222-81a3-9e9ab8483e06", 00:14:28.077 "is_configured": true, 00:14:28.077 "data_offset": 2048, 00:14:28.077 "data_size": 63488 00:14:28.077 }, 00:14:28.077 { 00:14:28.077 "name": "BaseBdev4", 00:14:28.077 "uuid": "56642e14-b9ea-4a4a-8279-4638680356d1", 00:14:28.077 "is_configured": true, 00:14:28.077 "data_offset": 2048, 00:14:28.077 "data_size": 63488 00:14:28.077 } 00:14:28.077 ] 00:14:28.077 }' 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.077 05:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.667 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.667 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.667 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.667 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:28.667 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.667 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:28.667 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.668 [2024-11-21 05:00:45.143847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.668 BaseBdev1 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.668 [ 00:14:28.668 { 00:14:28.668 "name": "BaseBdev1", 00:14:28.668 "aliases": [ 00:14:28.668 "5878930b-4081-4a79-9cc7-d9a10e6ca72b" 00:14:28.668 ], 00:14:28.668 "product_name": "Malloc disk", 00:14:28.668 "block_size": 512, 00:14:28.668 "num_blocks": 65536, 00:14:28.668 "uuid": "5878930b-4081-4a79-9cc7-d9a10e6ca72b", 00:14:28.668 "assigned_rate_limits": { 00:14:28.668 "rw_ios_per_sec": 0, 00:14:28.668 "rw_mbytes_per_sec": 0, 00:14:28.668 
"r_mbytes_per_sec": 0, 00:14:28.668 "w_mbytes_per_sec": 0 00:14:28.668 }, 00:14:28.668 "claimed": true, 00:14:28.668 "claim_type": "exclusive_write", 00:14:28.668 "zoned": false, 00:14:28.668 "supported_io_types": { 00:14:28.668 "read": true, 00:14:28.668 "write": true, 00:14:28.668 "unmap": true, 00:14:28.668 "flush": true, 00:14:28.668 "reset": true, 00:14:28.668 "nvme_admin": false, 00:14:28.668 "nvme_io": false, 00:14:28.668 "nvme_io_md": false, 00:14:28.668 "write_zeroes": true, 00:14:28.668 "zcopy": true, 00:14:28.668 "get_zone_info": false, 00:14:28.668 "zone_management": false, 00:14:28.668 "zone_append": false, 00:14:28.668 "compare": false, 00:14:28.668 "compare_and_write": false, 00:14:28.668 "abort": true, 00:14:28.668 "seek_hole": false, 00:14:28.668 "seek_data": false, 00:14:28.668 "copy": true, 00:14:28.668 "nvme_iov_md": false 00:14:28.668 }, 00:14:28.668 "memory_domains": [ 00:14:28.668 { 00:14:28.668 "dma_device_id": "system", 00:14:28.668 "dma_device_type": 1 00:14:28.668 }, 00:14:28.668 { 00:14:28.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.668 "dma_device_type": 2 00:14:28.668 } 00:14:28.668 ], 00:14:28.668 "driver_specific": {} 00:14:28.668 } 00:14:28.668 ] 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.668 05:00:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.668 "name": "Existed_Raid", 00:14:28.668 "uuid": "0acbc19e-56a2-4145-8eee-a13109bbb450", 00:14:28.668 "strip_size_kb": 64, 00:14:28.668 "state": "configuring", 00:14:28.668 "raid_level": "raid5f", 00:14:28.668 "superblock": true, 00:14:28.668 "num_base_bdevs": 4, 00:14:28.668 "num_base_bdevs_discovered": 3, 00:14:28.668 "num_base_bdevs_operational": 4, 00:14:28.668 "base_bdevs_list": [ 00:14:28.668 { 00:14:28.668 "name": "BaseBdev1", 00:14:28.668 "uuid": "5878930b-4081-4a79-9cc7-d9a10e6ca72b", 00:14:28.668 "is_configured": true, 00:14:28.668 "data_offset": 2048, 00:14:28.668 "data_size": 63488 00:14:28.668 
}, 00:14:28.668 { 00:14:28.668 "name": null, 00:14:28.668 "uuid": "1dbbdc80-ca95-4dde-91ce-71d2e812c321", 00:14:28.668 "is_configured": false, 00:14:28.668 "data_offset": 0, 00:14:28.668 "data_size": 63488 00:14:28.668 }, 00:14:28.668 { 00:14:28.668 "name": "BaseBdev3", 00:14:28.668 "uuid": "18f173ee-226d-4222-81a3-9e9ab8483e06", 00:14:28.668 "is_configured": true, 00:14:28.668 "data_offset": 2048, 00:14:28.668 "data_size": 63488 00:14:28.668 }, 00:14:28.668 { 00:14:28.668 "name": "BaseBdev4", 00:14:28.668 "uuid": "56642e14-b9ea-4a4a-8279-4638680356d1", 00:14:28.668 "is_configured": true, 00:14:28.668 "data_offset": 2048, 00:14:28.668 "data_size": 63488 00:14:28.668 } 00:14:28.668 ] 00:14:28.668 }' 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.668 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.944 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.944 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:28.944 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.944 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.944 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.944 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:28.944 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:28.944 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.944 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.204 
[2024-11-21 05:00:45.679361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.204 "name": "Existed_Raid", 00:14:29.204 "uuid": "0acbc19e-56a2-4145-8eee-a13109bbb450", 00:14:29.204 "strip_size_kb": 64, 00:14:29.204 "state": "configuring", 00:14:29.204 "raid_level": "raid5f", 00:14:29.204 "superblock": true, 00:14:29.204 "num_base_bdevs": 4, 00:14:29.204 "num_base_bdevs_discovered": 2, 00:14:29.204 "num_base_bdevs_operational": 4, 00:14:29.204 "base_bdevs_list": [ 00:14:29.204 { 00:14:29.204 "name": "BaseBdev1", 00:14:29.204 "uuid": "5878930b-4081-4a79-9cc7-d9a10e6ca72b", 00:14:29.204 "is_configured": true, 00:14:29.204 "data_offset": 2048, 00:14:29.204 "data_size": 63488 00:14:29.204 }, 00:14:29.204 { 00:14:29.204 "name": null, 00:14:29.204 "uuid": "1dbbdc80-ca95-4dde-91ce-71d2e812c321", 00:14:29.204 "is_configured": false, 00:14:29.204 "data_offset": 0, 00:14:29.204 "data_size": 63488 00:14:29.204 }, 00:14:29.204 { 00:14:29.204 "name": null, 00:14:29.204 "uuid": "18f173ee-226d-4222-81a3-9e9ab8483e06", 00:14:29.204 "is_configured": false, 00:14:29.204 "data_offset": 0, 00:14:29.204 "data_size": 63488 00:14:29.204 }, 00:14:29.204 { 00:14:29.204 "name": "BaseBdev4", 00:14:29.204 "uuid": "56642e14-b9ea-4a4a-8279-4638680356d1", 00:14:29.204 "is_configured": true, 00:14:29.204 "data_offset": 2048, 00:14:29.204 "data_size": 63488 00:14:29.204 } 00:14:29.204 ] 00:14:29.204 }' 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.204 05:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.463 [2024-11-21 05:00:46.170580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.463 05:00:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.463 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.723 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.723 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.723 "name": "Existed_Raid", 00:14:29.723 "uuid": "0acbc19e-56a2-4145-8eee-a13109bbb450", 00:14:29.723 "strip_size_kb": 64, 00:14:29.723 "state": "configuring", 00:14:29.723 "raid_level": "raid5f", 00:14:29.723 "superblock": true, 00:14:29.723 "num_base_bdevs": 4, 00:14:29.723 "num_base_bdevs_discovered": 3, 00:14:29.723 "num_base_bdevs_operational": 4, 00:14:29.723 "base_bdevs_list": [ 00:14:29.723 { 00:14:29.723 "name": "BaseBdev1", 00:14:29.723 "uuid": "5878930b-4081-4a79-9cc7-d9a10e6ca72b", 00:14:29.723 "is_configured": true, 00:14:29.723 "data_offset": 2048, 00:14:29.723 "data_size": 63488 00:14:29.723 }, 00:14:29.723 { 00:14:29.723 "name": null, 00:14:29.723 "uuid": "1dbbdc80-ca95-4dde-91ce-71d2e812c321", 00:14:29.723 "is_configured": false, 00:14:29.723 "data_offset": 0, 00:14:29.723 "data_size": 63488 00:14:29.723 }, 00:14:29.723 { 00:14:29.723 "name": "BaseBdev3", 00:14:29.723 "uuid": "18f173ee-226d-4222-81a3-9e9ab8483e06", 00:14:29.723 "is_configured": true, 00:14:29.723 "data_offset": 2048, 00:14:29.723 "data_size": 63488 00:14:29.723 }, 00:14:29.723 { 
00:14:29.723 "name": "BaseBdev4", 00:14:29.723 "uuid": "56642e14-b9ea-4a4a-8279-4638680356d1", 00:14:29.723 "is_configured": true, 00:14:29.723 "data_offset": 2048, 00:14:29.723 "data_size": 63488 00:14:29.723 } 00:14:29.723 ] 00:14:29.723 }' 00:14:29.723 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.723 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.982 [2024-11-21 05:00:46.677726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.982 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.242 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.242 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.242 "name": "Existed_Raid", 00:14:30.242 "uuid": "0acbc19e-56a2-4145-8eee-a13109bbb450", 00:14:30.242 "strip_size_kb": 64, 00:14:30.242 "state": "configuring", 00:14:30.242 "raid_level": "raid5f", 00:14:30.242 "superblock": true, 00:14:30.242 "num_base_bdevs": 4, 00:14:30.242 "num_base_bdevs_discovered": 2, 00:14:30.242 
"num_base_bdevs_operational": 4, 00:14:30.242 "base_bdevs_list": [ 00:14:30.242 { 00:14:30.242 "name": null, 00:14:30.242 "uuid": "5878930b-4081-4a79-9cc7-d9a10e6ca72b", 00:14:30.242 "is_configured": false, 00:14:30.242 "data_offset": 0, 00:14:30.242 "data_size": 63488 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "name": null, 00:14:30.242 "uuid": "1dbbdc80-ca95-4dde-91ce-71d2e812c321", 00:14:30.242 "is_configured": false, 00:14:30.242 "data_offset": 0, 00:14:30.242 "data_size": 63488 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "name": "BaseBdev3", 00:14:30.242 "uuid": "18f173ee-226d-4222-81a3-9e9ab8483e06", 00:14:30.242 "is_configured": true, 00:14:30.242 "data_offset": 2048, 00:14:30.242 "data_size": 63488 00:14:30.242 }, 00:14:30.242 { 00:14:30.242 "name": "BaseBdev4", 00:14:30.242 "uuid": "56642e14-b9ea-4a4a-8279-4638680356d1", 00:14:30.242 "is_configured": true, 00:14:30.242 "data_offset": 2048, 00:14:30.242 "data_size": 63488 00:14:30.242 } 00:14:30.242 ] 00:14:30.242 }' 00:14:30.242 05:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.242 05:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.501 [2024-11-21 05:00:47.176840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.501 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.501 "name": "Existed_Raid", 00:14:30.501 "uuid": "0acbc19e-56a2-4145-8eee-a13109bbb450", 00:14:30.501 "strip_size_kb": 64, 00:14:30.501 "state": "configuring", 00:14:30.501 "raid_level": "raid5f", 00:14:30.501 "superblock": true, 00:14:30.501 "num_base_bdevs": 4, 00:14:30.501 "num_base_bdevs_discovered": 3, 00:14:30.501 "num_base_bdevs_operational": 4, 00:14:30.501 "base_bdevs_list": [ 00:14:30.501 { 00:14:30.501 "name": null, 00:14:30.501 "uuid": "5878930b-4081-4a79-9cc7-d9a10e6ca72b", 00:14:30.501 "is_configured": false, 00:14:30.501 "data_offset": 0, 00:14:30.501 "data_size": 63488 00:14:30.501 }, 00:14:30.501 { 00:14:30.501 "name": "BaseBdev2", 00:14:30.501 "uuid": "1dbbdc80-ca95-4dde-91ce-71d2e812c321", 00:14:30.501 "is_configured": true, 00:14:30.501 "data_offset": 2048, 00:14:30.501 "data_size": 63488 00:14:30.501 }, 00:14:30.502 { 00:14:30.502 "name": "BaseBdev3", 00:14:30.502 "uuid": "18f173ee-226d-4222-81a3-9e9ab8483e06", 00:14:30.502 "is_configured": true, 00:14:30.502 "data_offset": 2048, 00:14:30.502 "data_size": 63488 00:14:30.502 }, 00:14:30.502 { 00:14:30.502 "name": "BaseBdev4", 00:14:30.502 "uuid": "56642e14-b9ea-4a4a-8279-4638680356d1", 00:14:30.502 "is_configured": true, 00:14:30.502 "data_offset": 2048, 00:14:30.502 "data_size": 63488 00:14:30.502 } 00:14:30.502 ] 00:14:30.502 }' 00:14:30.502 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.502 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5878930b-4081-4a79-9cc7-d9a10e6ca72b 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.071 [2024-11-21 05:00:47.737945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:31.071 [2024-11-21 05:00:47.738209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:31.071 [2024-11-21 
05:00:47.738227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:31.071 [2024-11-21 05:00:47.738612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:31.071 NewBaseBdev 00:14:31.071 [2024-11-21 05:00:47.739264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:31.071 [2024-11-21 05:00:47.739288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:31.071 [2024-11-21 05:00:47.739461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.071 [ 00:14:31.071 { 00:14:31.071 "name": "NewBaseBdev", 00:14:31.071 "aliases": [ 00:14:31.071 "5878930b-4081-4a79-9cc7-d9a10e6ca72b" 00:14:31.071 ], 00:14:31.071 "product_name": "Malloc disk", 00:14:31.071 "block_size": 512, 00:14:31.071 "num_blocks": 65536, 00:14:31.071 "uuid": "5878930b-4081-4a79-9cc7-d9a10e6ca72b", 00:14:31.071 "assigned_rate_limits": { 00:14:31.071 "rw_ios_per_sec": 0, 00:14:31.071 "rw_mbytes_per_sec": 0, 00:14:31.071 "r_mbytes_per_sec": 0, 00:14:31.071 "w_mbytes_per_sec": 0 00:14:31.071 }, 00:14:31.071 "claimed": true, 00:14:31.071 "claim_type": "exclusive_write", 00:14:31.071 "zoned": false, 00:14:31.071 "supported_io_types": { 00:14:31.071 "read": true, 00:14:31.071 "write": true, 00:14:31.071 "unmap": true, 00:14:31.071 "flush": true, 00:14:31.071 "reset": true, 00:14:31.071 "nvme_admin": false, 00:14:31.071 "nvme_io": false, 00:14:31.071 "nvme_io_md": false, 00:14:31.071 "write_zeroes": true, 00:14:31.071 "zcopy": true, 00:14:31.071 "get_zone_info": false, 00:14:31.071 "zone_management": false, 00:14:31.071 "zone_append": false, 00:14:31.071 "compare": false, 00:14:31.071 "compare_and_write": false, 00:14:31.071 "abort": true, 00:14:31.071 "seek_hole": false, 00:14:31.071 "seek_data": false, 00:14:31.071 "copy": true, 00:14:31.071 "nvme_iov_md": false 00:14:31.071 }, 00:14:31.071 "memory_domains": [ 00:14:31.071 { 00:14:31.071 "dma_device_id": "system", 00:14:31.071 "dma_device_type": 1 00:14:31.071 }, 00:14:31.071 { 00:14:31.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.071 "dma_device_type": 2 00:14:31.071 } 00:14:31.071 ], 00:14:31.071 "driver_specific": {} 00:14:31.071 } 00:14:31.071 ] 00:14:31.071 05:00:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.071 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.072 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.072 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.072 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.072 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.072 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.072 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.072 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.331 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:31.331 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.331 "name": "Existed_Raid", 00:14:31.331 "uuid": "0acbc19e-56a2-4145-8eee-a13109bbb450", 00:14:31.331 "strip_size_kb": 64, 00:14:31.331 "state": "online", 00:14:31.331 "raid_level": "raid5f", 00:14:31.331 "superblock": true, 00:14:31.331 "num_base_bdevs": 4, 00:14:31.331 "num_base_bdevs_discovered": 4, 00:14:31.331 "num_base_bdevs_operational": 4, 00:14:31.331 "base_bdevs_list": [ 00:14:31.331 { 00:14:31.331 "name": "NewBaseBdev", 00:14:31.331 "uuid": "5878930b-4081-4a79-9cc7-d9a10e6ca72b", 00:14:31.331 "is_configured": true, 00:14:31.331 "data_offset": 2048, 00:14:31.331 "data_size": 63488 00:14:31.331 }, 00:14:31.331 { 00:14:31.331 "name": "BaseBdev2", 00:14:31.331 "uuid": "1dbbdc80-ca95-4dde-91ce-71d2e812c321", 00:14:31.331 "is_configured": true, 00:14:31.331 "data_offset": 2048, 00:14:31.331 "data_size": 63488 00:14:31.331 }, 00:14:31.331 { 00:14:31.331 "name": "BaseBdev3", 00:14:31.331 "uuid": "18f173ee-226d-4222-81a3-9e9ab8483e06", 00:14:31.331 "is_configured": true, 00:14:31.331 "data_offset": 2048, 00:14:31.331 "data_size": 63488 00:14:31.331 }, 00:14:31.331 { 00:14:31.331 "name": "BaseBdev4", 00:14:31.331 "uuid": "56642e14-b9ea-4a4a-8279-4638680356d1", 00:14:31.331 "is_configured": true, 00:14:31.331 "data_offset": 2048, 00:14:31.331 "data_size": 63488 00:14:31.331 } 00:14:31.331 ] 00:14:31.331 }' 00:14:31.331 05:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.331 05:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.591 [2024-11-21 05:00:48.265767] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:31.591 "name": "Existed_Raid", 00:14:31.591 "aliases": [ 00:14:31.591 "0acbc19e-56a2-4145-8eee-a13109bbb450" 00:14:31.591 ], 00:14:31.591 "product_name": "Raid Volume", 00:14:31.591 "block_size": 512, 00:14:31.591 "num_blocks": 190464, 00:14:31.591 "uuid": "0acbc19e-56a2-4145-8eee-a13109bbb450", 00:14:31.591 "assigned_rate_limits": { 00:14:31.591 "rw_ios_per_sec": 0, 00:14:31.591 "rw_mbytes_per_sec": 0, 00:14:31.591 "r_mbytes_per_sec": 0, 00:14:31.591 "w_mbytes_per_sec": 0 00:14:31.591 }, 00:14:31.591 "claimed": false, 00:14:31.591 "zoned": false, 00:14:31.591 "supported_io_types": { 00:14:31.591 "read": true, 00:14:31.591 "write": true, 00:14:31.591 "unmap": false, 00:14:31.591 "flush": false, 00:14:31.591 "reset": true, 00:14:31.591 "nvme_admin": false, 00:14:31.591 "nvme_io": false, 
00:14:31.591 "nvme_io_md": false, 00:14:31.591 "write_zeroes": true, 00:14:31.591 "zcopy": false, 00:14:31.591 "get_zone_info": false, 00:14:31.591 "zone_management": false, 00:14:31.591 "zone_append": false, 00:14:31.591 "compare": false, 00:14:31.591 "compare_and_write": false, 00:14:31.591 "abort": false, 00:14:31.591 "seek_hole": false, 00:14:31.591 "seek_data": false, 00:14:31.591 "copy": false, 00:14:31.591 "nvme_iov_md": false 00:14:31.591 }, 00:14:31.591 "driver_specific": { 00:14:31.591 "raid": { 00:14:31.591 "uuid": "0acbc19e-56a2-4145-8eee-a13109bbb450", 00:14:31.591 "strip_size_kb": 64, 00:14:31.591 "state": "online", 00:14:31.591 "raid_level": "raid5f", 00:14:31.591 "superblock": true, 00:14:31.591 "num_base_bdevs": 4, 00:14:31.591 "num_base_bdevs_discovered": 4, 00:14:31.591 "num_base_bdevs_operational": 4, 00:14:31.591 "base_bdevs_list": [ 00:14:31.591 { 00:14:31.591 "name": "NewBaseBdev", 00:14:31.591 "uuid": "5878930b-4081-4a79-9cc7-d9a10e6ca72b", 00:14:31.591 "is_configured": true, 00:14:31.591 "data_offset": 2048, 00:14:31.591 "data_size": 63488 00:14:31.591 }, 00:14:31.591 { 00:14:31.591 "name": "BaseBdev2", 00:14:31.591 "uuid": "1dbbdc80-ca95-4dde-91ce-71d2e812c321", 00:14:31.591 "is_configured": true, 00:14:31.591 "data_offset": 2048, 00:14:31.591 "data_size": 63488 00:14:31.591 }, 00:14:31.591 { 00:14:31.591 "name": "BaseBdev3", 00:14:31.591 "uuid": "18f173ee-226d-4222-81a3-9e9ab8483e06", 00:14:31.591 "is_configured": true, 00:14:31.591 "data_offset": 2048, 00:14:31.591 "data_size": 63488 00:14:31.591 }, 00:14:31.591 { 00:14:31.591 "name": "BaseBdev4", 00:14:31.591 "uuid": "56642e14-b9ea-4a4a-8279-4638680356d1", 00:14:31.591 "is_configured": true, 00:14:31.591 "data_offset": 2048, 00:14:31.591 "data_size": 63488 00:14:31.591 } 00:14:31.591 ] 00:14:31.591 } 00:14:31.591 } 00:14:31.591 }' 00:14:31.591 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:31.851 BaseBdev2 00:14:31.851 BaseBdev3 00:14:31.851 BaseBdev4' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.851 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.112 [2024-11-21 05:00:48.593026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.112 [2024-11-21 05:00:48.593063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.112 [2024-11-21 05:00:48.593192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.112 [2024-11-21 05:00:48.593552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.112 [2024-11-21 05:00:48.593575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94029 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 94029 ']' 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 94029 00:14:32.112 05:00:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94029 00:14:32.112 killing process with pid 94029 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94029' 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 94029 00:14:32.112 [2024-11-21 05:00:48.633887] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.112 05:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 94029 00:14:32.112 [2024-11-21 05:00:48.720877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.372 05:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:32.372 00:14:32.372 real 0m9.954s 00:14:32.372 user 0m16.454s 00:14:32.372 sys 0m2.233s 00:14:32.372 05:00:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.372 05:00:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.372 ************************************ 00:14:32.372 END TEST raid5f_state_function_test_sb 00:14:32.372 ************************************ 00:14:32.644 05:00:49 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:32.644 05:00:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:32.644 
05:00:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.644 05:00:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.644 ************************************ 00:14:32.644 START TEST raid5f_superblock_test 00:14:32.644 ************************************ 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94686 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:32.644 05:00:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94686 00:14:32.645 05:00:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 94686 ']' 00:14:32.645 05:00:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.645 05:00:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.645 05:00:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.645 05:00:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.645 05:00:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.645 [2024-11-21 05:00:49.240461] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:14:32.645 [2024-11-21 05:00:49.240587] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94686 ] 00:14:32.905 [2024-11-21 05:00:49.413726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.905 [2024-11-21 05:00:49.452891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.905 [2024-11-21 05:00:49.535519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.905 [2024-11-21 05:00:49.535567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 malloc1 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 [2024-11-21 05:00:50.095407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:33.474 [2024-11-21 05:00:50.095498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.474 [2024-11-21 05:00:50.095521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:33.474 [2024-11-21 05:00:50.095546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.474 [2024-11-21 05:00:50.097629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.474 [2024-11-21 05:00:50.097663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:33.474 pt1 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 malloc2 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 [2024-11-21 05:00:50.123863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:33.474 [2024-11-21 05:00:50.123922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.474 [2024-11-21 05:00:50.123939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:33.474 [2024-11-21 05:00:50.123949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.474 [2024-11-21 05:00:50.126063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.474 [2024-11-21 05:00:50.126132] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:33.474 pt2 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:33.474 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.475 malloc3 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.475 [2024-11-21 05:00:50.152707] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:33.475 [2024-11-21 05:00:50.152752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.475 [2024-11-21 05:00:50.152769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:33.475 [2024-11-21 05:00:50.152779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.475 [2024-11-21 05:00:50.154789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.475 [2024-11-21 05:00:50.154822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:33.475 pt3 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.475 05:00:50 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.475 malloc4 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.475 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.475 [2024-11-21 05:00:50.202991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:33.475 [2024-11-21 05:00:50.203123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.475 [2024-11-21 05:00:50.203164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:33.475 [2024-11-21 05:00:50.203196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.735 [2024-11-21 05:00:50.207174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.735 [2024-11-21 05:00:50.207226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:33.735 pt4 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:33.735 [2024-11-21 05:00:50.215477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:33.735 [2024-11-21 05:00:50.217824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:33.735 [2024-11-21 05:00:50.217898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:33.735 [2024-11-21 05:00:50.217974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:33.735 [2024-11-21 05:00:50.218204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:33.735 [2024-11-21 05:00:50.218231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:33.735 [2024-11-21 05:00:50.218564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:33.735 [2024-11-21 05:00:50.219177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:33.735 [2024-11-21 05:00:50.219202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:33.735 [2024-11-21 05:00:50.219429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.735 
05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.735 "name": "raid_bdev1", 00:14:33.735 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:33.735 "strip_size_kb": 64, 00:14:33.735 "state": "online", 00:14:33.735 "raid_level": "raid5f", 00:14:33.735 "superblock": true, 00:14:33.735 "num_base_bdevs": 4, 00:14:33.735 "num_base_bdevs_discovered": 4, 00:14:33.735 "num_base_bdevs_operational": 4, 00:14:33.735 "base_bdevs_list": [ 00:14:33.735 { 00:14:33.735 "name": "pt1", 00:14:33.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 }, 00:14:33.735 { 00:14:33.735 "name": "pt2", 00:14:33.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 
"data_size": 63488 00:14:33.735 }, 00:14:33.735 { 00:14:33.735 "name": "pt3", 00:14:33.735 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 }, 00:14:33.735 { 00:14:33.735 "name": "pt4", 00:14:33.735 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 } 00:14:33.735 ] 00:14:33.735 }' 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.735 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.994 [2024-11-21 05:00:50.615148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:33.994 "name": "raid_bdev1", 00:14:33.994 "aliases": [ 00:14:33.994 "24bead92-17c0-455a-a5a4-944ea6cee03a" 00:14:33.994 ], 00:14:33.994 "product_name": "Raid Volume", 00:14:33.994 "block_size": 512, 00:14:33.994 "num_blocks": 190464, 00:14:33.994 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:33.994 "assigned_rate_limits": { 00:14:33.994 "rw_ios_per_sec": 0, 00:14:33.994 "rw_mbytes_per_sec": 0, 00:14:33.994 "r_mbytes_per_sec": 0, 00:14:33.994 "w_mbytes_per_sec": 0 00:14:33.994 }, 00:14:33.994 "claimed": false, 00:14:33.994 "zoned": false, 00:14:33.994 "supported_io_types": { 00:14:33.994 "read": true, 00:14:33.994 "write": true, 00:14:33.994 "unmap": false, 00:14:33.994 "flush": false, 00:14:33.994 "reset": true, 00:14:33.994 "nvme_admin": false, 00:14:33.994 "nvme_io": false, 00:14:33.994 "nvme_io_md": false, 00:14:33.994 "write_zeroes": true, 00:14:33.994 "zcopy": false, 00:14:33.994 "get_zone_info": false, 00:14:33.994 "zone_management": false, 00:14:33.994 "zone_append": false, 00:14:33.994 "compare": false, 00:14:33.994 "compare_and_write": false, 00:14:33.994 "abort": false, 00:14:33.994 "seek_hole": false, 00:14:33.994 "seek_data": false, 00:14:33.994 "copy": false, 00:14:33.994 "nvme_iov_md": false 00:14:33.994 }, 00:14:33.994 "driver_specific": { 00:14:33.994 "raid": { 00:14:33.994 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:33.994 "strip_size_kb": 64, 00:14:33.994 "state": "online", 00:14:33.994 "raid_level": "raid5f", 00:14:33.994 "superblock": true, 00:14:33.994 "num_base_bdevs": 4, 00:14:33.994 "num_base_bdevs_discovered": 4, 00:14:33.994 "num_base_bdevs_operational": 4, 00:14:33.994 "base_bdevs_list": [ 00:14:33.994 { 00:14:33.994 "name": "pt1", 00:14:33.994 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:33.994 "is_configured": true, 00:14:33.994 "data_offset": 2048, 
00:14:33.994 "data_size": 63488 00:14:33.994 }, 00:14:33.994 { 00:14:33.994 "name": "pt2", 00:14:33.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.994 "is_configured": true, 00:14:33.994 "data_offset": 2048, 00:14:33.994 "data_size": 63488 00:14:33.994 }, 00:14:33.994 { 00:14:33.994 "name": "pt3", 00:14:33.994 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.994 "is_configured": true, 00:14:33.994 "data_offset": 2048, 00:14:33.994 "data_size": 63488 00:14:33.994 }, 00:14:33.994 { 00:14:33.994 "name": "pt4", 00:14:33.994 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:33.994 "is_configured": true, 00:14:33.994 "data_offset": 2048, 00:14:33.994 "data_size": 63488 00:14:33.994 } 00:14:33.994 ] 00:14:33.994 } 00:14:33.994 } 00:14:33.994 }' 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:33.994 pt2 00:14:33.994 pt3 00:14:33.994 pt4' 00:14:33.994 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.254 05:00:50 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.254 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.255 [2024-11-21 05:00:50.930617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=24bead92-17c0-455a-a5a4-944ea6cee03a 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
24bead92-17c0-455a-a5a4-944ea6cee03a ']' 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.255 [2024-11-21 05:00:50.966305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.255 [2024-11-21 05:00:50.966336] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.255 [2024-11-21 05:00:50.966414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.255 [2024-11-21 05:00:50.966505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.255 [2024-11-21 05:00:50.966520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.255 05:00:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:34.516 
05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.516 05:00:51 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.516 [2024-11-21 05:00:51.130131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:34.516 [2024-11-21 05:00:51.132150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:34.516 [2024-11-21 05:00:51.132215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:34.516 [2024-11-21 05:00:51.132255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:34.516 [2024-11-21 05:00:51.132308] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:34.516 [2024-11-21 05:00:51.132356] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:34.516 [2024-11-21 05:00:51.132385] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:34.516 [2024-11-21 05:00:51.132407] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:34.516 [2024-11-21 05:00:51.132427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.516 [2024-11-21 05:00:51.132442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:14:34.516 request: 00:14:34.516 { 00:14:34.516 "name": "raid_bdev1", 00:14:34.516 "raid_level": "raid5f", 00:14:34.516 "base_bdevs": [ 00:14:34.516 "malloc1", 00:14:34.516 "malloc2", 00:14:34.516 "malloc3", 00:14:34.516 "malloc4" 00:14:34.516 ], 00:14:34.516 "strip_size_kb": 64, 00:14:34.516 "superblock": false, 00:14:34.516 "method": "bdev_raid_create", 00:14:34.516 "req_id": 1 00:14:34.516 } 00:14:34.516 Got JSON-RPC error response 
00:14:34.516 response: 00:14:34.516 { 00:14:34.516 "code": -17, 00:14:34.516 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:34.516 } 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.516 [2024-11-21 05:00:51.193913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:34.516 [2024-11-21 05:00:51.193962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:34.516 [2024-11-21 05:00:51.193984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:34.516 [2024-11-21 05:00:51.193993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.516 [2024-11-21 05:00:51.196161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.516 [2024-11-21 05:00:51.196188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:34.516 [2024-11-21 05:00:51.196262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:34.516 [2024-11-21 05:00:51.196312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:34.516 pt1 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.516 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.776 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.776 "name": "raid_bdev1", 00:14:34.776 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:34.776 "strip_size_kb": 64, 00:14:34.776 "state": "configuring", 00:14:34.776 "raid_level": "raid5f", 00:14:34.776 "superblock": true, 00:14:34.776 "num_base_bdevs": 4, 00:14:34.776 "num_base_bdevs_discovered": 1, 00:14:34.776 "num_base_bdevs_operational": 4, 00:14:34.776 "base_bdevs_list": [ 00:14:34.776 { 00:14:34.776 "name": "pt1", 00:14:34.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:34.776 "is_configured": true, 00:14:34.776 "data_offset": 2048, 00:14:34.776 "data_size": 63488 00:14:34.776 }, 00:14:34.776 { 00:14:34.776 "name": null, 00:14:34.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.776 "is_configured": false, 00:14:34.776 "data_offset": 2048, 00:14:34.776 "data_size": 63488 00:14:34.776 }, 00:14:34.776 { 00:14:34.776 "name": null, 00:14:34.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:34.776 "is_configured": false, 00:14:34.776 "data_offset": 2048, 00:14:34.776 "data_size": 63488 00:14:34.776 }, 00:14:34.776 { 00:14:34.776 "name": null, 00:14:34.776 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:34.776 "is_configured": false, 00:14:34.776 "data_offset": 2048, 00:14:34.776 "data_size": 63488 00:14:34.776 } 00:14:34.776 ] 00:14:34.776 }' 
00:14:34.776 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.776 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.036 [2024-11-21 05:00:51.685213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:35.036 [2024-11-21 05:00:51.685278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.036 [2024-11-21 05:00:51.685300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:35.036 [2024-11-21 05:00:51.685309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.036 [2024-11-21 05:00:51.685703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.036 [2024-11-21 05:00:51.685720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:35.036 [2024-11-21 05:00:51.685800] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:35.036 [2024-11-21 05:00:51.685823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:35.036 pt2 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.036 [2024-11-21 05:00:51.693183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.036 "name": "raid_bdev1", 00:14:35.036 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:35.036 "strip_size_kb": 64, 00:14:35.036 "state": "configuring", 00:14:35.036 "raid_level": "raid5f", 00:14:35.036 "superblock": true, 00:14:35.036 "num_base_bdevs": 4, 00:14:35.036 "num_base_bdevs_discovered": 1, 00:14:35.036 "num_base_bdevs_operational": 4, 00:14:35.036 "base_bdevs_list": [ 00:14:35.036 { 00:14:35.036 "name": "pt1", 00:14:35.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.036 "is_configured": true, 00:14:35.036 "data_offset": 2048, 00:14:35.036 "data_size": 63488 00:14:35.036 }, 00:14:35.036 { 00:14:35.036 "name": null, 00:14:35.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.036 "is_configured": false, 00:14:35.036 "data_offset": 0, 00:14:35.036 "data_size": 63488 00:14:35.036 }, 00:14:35.036 { 00:14:35.036 "name": null, 00:14:35.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.036 "is_configured": false, 00:14:35.036 "data_offset": 2048, 00:14:35.036 "data_size": 63488 00:14:35.036 }, 00:14:35.036 { 00:14:35.036 "name": null, 00:14:35.036 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:35.036 "is_configured": false, 00:14:35.036 "data_offset": 2048, 00:14:35.036 "data_size": 63488 00:14:35.036 } 00:14:35.036 ] 00:14:35.036 }' 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.036 05:00:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.607 [2024-11-21 05:00:52.104498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:35.607 [2024-11-21 05:00:52.104576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.607 [2024-11-21 05:00:52.104598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:35.607 [2024-11-21 05:00:52.104610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.607 [2024-11-21 05:00:52.105064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.607 [2024-11-21 05:00:52.105086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:35.607 [2024-11-21 05:00:52.105184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:35.607 [2024-11-21 05:00:52.105213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:35.607 pt2 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.607 [2024-11-21 05:00:52.116467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:35.607 [2024-11-21 05:00:52.116524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.607 [2024-11-21 05:00:52.116544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:35.607 [2024-11-21 05:00:52.116556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.607 [2024-11-21 05:00:52.116914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.607 [2024-11-21 05:00:52.116938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:35.607 [2024-11-21 05:00:52.117000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:35.607 [2024-11-21 05:00:52.117022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:35.607 pt3 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.607 [2024-11-21 05:00:52.128406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:35.607 [2024-11-21 05:00:52.128462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.607 [2024-11-21 05:00:52.128482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:35.607 [2024-11-21 05:00:52.128494] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.607 [2024-11-21 05:00:52.128843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.607 [2024-11-21 05:00:52.128862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:35.607 [2024-11-21 05:00:52.128915] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:35.607 [2024-11-21 05:00:52.128935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:35.607 [2024-11-21 05:00:52.129031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:35.607 [2024-11-21 05:00:52.129042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:35.607 [2024-11-21 05:00:52.129294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:35.607 [2024-11-21 05:00:52.129805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:35.607 [2024-11-21 05:00:52.129824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:14:35.607 [2024-11-21 05:00:52.129928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.607 pt4 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.607 "name": "raid_bdev1", 00:14:35.607 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:35.607 "strip_size_kb": 64, 00:14:35.607 "state": "online", 00:14:35.607 "raid_level": "raid5f", 00:14:35.607 "superblock": true, 00:14:35.607 "num_base_bdevs": 4, 00:14:35.607 "num_base_bdevs_discovered": 4, 00:14:35.607 "num_base_bdevs_operational": 4, 00:14:35.607 "base_bdevs_list": [ 00:14:35.607 { 00:14:35.607 "name": "pt1", 00:14:35.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.607 "is_configured": true, 00:14:35.607 
"data_offset": 2048, 00:14:35.607 "data_size": 63488 00:14:35.607 }, 00:14:35.607 { 00:14:35.607 "name": "pt2", 00:14:35.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.607 "is_configured": true, 00:14:35.607 "data_offset": 2048, 00:14:35.607 "data_size": 63488 00:14:35.607 }, 00:14:35.607 { 00:14:35.607 "name": "pt3", 00:14:35.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.607 "is_configured": true, 00:14:35.607 "data_offset": 2048, 00:14:35.607 "data_size": 63488 00:14:35.607 }, 00:14:35.607 { 00:14:35.607 "name": "pt4", 00:14:35.607 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:35.607 "is_configured": true, 00:14:35.607 "data_offset": 2048, 00:14:35.607 "data_size": 63488 00:14:35.607 } 00:14:35.607 ] 00:14:35.607 }' 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.607 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.867 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:35.867 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:35.867 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:35.867 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:35.867 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:35.867 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:35.867 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.867 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:35.867 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.867 05:00:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.867 [2024-11-21 05:00:52.563930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.867 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.127 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:36.128 "name": "raid_bdev1", 00:14:36.128 "aliases": [ 00:14:36.128 "24bead92-17c0-455a-a5a4-944ea6cee03a" 00:14:36.128 ], 00:14:36.128 "product_name": "Raid Volume", 00:14:36.128 "block_size": 512, 00:14:36.128 "num_blocks": 190464, 00:14:36.128 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:36.128 "assigned_rate_limits": { 00:14:36.128 "rw_ios_per_sec": 0, 00:14:36.128 "rw_mbytes_per_sec": 0, 00:14:36.128 "r_mbytes_per_sec": 0, 00:14:36.128 "w_mbytes_per_sec": 0 00:14:36.128 }, 00:14:36.128 "claimed": false, 00:14:36.128 "zoned": false, 00:14:36.128 "supported_io_types": { 00:14:36.128 "read": true, 00:14:36.128 "write": true, 00:14:36.128 "unmap": false, 00:14:36.128 "flush": false, 00:14:36.128 "reset": true, 00:14:36.128 "nvme_admin": false, 00:14:36.128 "nvme_io": false, 00:14:36.128 "nvme_io_md": false, 00:14:36.128 "write_zeroes": true, 00:14:36.128 "zcopy": false, 00:14:36.128 "get_zone_info": false, 00:14:36.128 "zone_management": false, 00:14:36.128 "zone_append": false, 00:14:36.128 "compare": false, 00:14:36.128 "compare_and_write": false, 00:14:36.128 "abort": false, 00:14:36.128 "seek_hole": false, 00:14:36.128 "seek_data": false, 00:14:36.128 "copy": false, 00:14:36.128 "nvme_iov_md": false 00:14:36.128 }, 00:14:36.128 "driver_specific": { 00:14:36.128 "raid": { 00:14:36.128 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:36.128 "strip_size_kb": 64, 00:14:36.128 "state": "online", 00:14:36.128 "raid_level": "raid5f", 00:14:36.128 "superblock": true, 00:14:36.128 "num_base_bdevs": 4, 00:14:36.128 "num_base_bdevs_discovered": 4, 
00:14:36.128 "num_base_bdevs_operational": 4, 00:14:36.128 "base_bdevs_list": [ 00:14:36.128 { 00:14:36.128 "name": "pt1", 00:14:36.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.128 "is_configured": true, 00:14:36.128 "data_offset": 2048, 00:14:36.128 "data_size": 63488 00:14:36.128 }, 00:14:36.128 { 00:14:36.128 "name": "pt2", 00:14:36.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.128 "is_configured": true, 00:14:36.128 "data_offset": 2048, 00:14:36.128 "data_size": 63488 00:14:36.128 }, 00:14:36.128 { 00:14:36.128 "name": "pt3", 00:14:36.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.128 "is_configured": true, 00:14:36.128 "data_offset": 2048, 00:14:36.128 "data_size": 63488 00:14:36.128 }, 00:14:36.128 { 00:14:36.128 "name": "pt4", 00:14:36.128 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.128 "is_configured": true, 00:14:36.128 "data_offset": 2048, 00:14:36.128 "data_size": 63488 00:14:36.128 } 00:14:36.128 ] 00:14:36.128 } 00:14:36.128 } 00:14:36.128 }' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:36.128 pt2 00:14:36.128 pt3 00:14:36.128 pt4' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.128 05:00:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.128 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.388 [2024-11-21 05:00:52.895632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.388 
05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 24bead92-17c0-455a-a5a4-944ea6cee03a '!=' 24bead92-17c0-455a-a5a4-944ea6cee03a ']' 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.388 [2024-11-21 05:00:52.931287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.388 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.389 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.389 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.389 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.389 "name": "raid_bdev1", 00:14:36.389 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:36.389 "strip_size_kb": 64, 00:14:36.389 "state": "online", 00:14:36.389 "raid_level": "raid5f", 00:14:36.389 "superblock": true, 00:14:36.389 "num_base_bdevs": 4, 00:14:36.389 "num_base_bdevs_discovered": 3, 00:14:36.389 "num_base_bdevs_operational": 3, 00:14:36.389 "base_bdevs_list": [ 00:14:36.389 { 00:14:36.389 "name": null, 00:14:36.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.389 "is_configured": false, 00:14:36.389 "data_offset": 0, 00:14:36.389 "data_size": 63488 00:14:36.389 }, 00:14:36.389 { 00:14:36.389 "name": "pt2", 00:14:36.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.389 "is_configured": true, 00:14:36.389 "data_offset": 2048, 00:14:36.389 "data_size": 63488 00:14:36.389 }, 00:14:36.389 { 00:14:36.389 "name": "pt3", 00:14:36.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.389 "is_configured": true, 00:14:36.389 "data_offset": 2048, 00:14:36.389 "data_size": 63488 00:14:36.389 }, 00:14:36.389 { 00:14:36.389 "name": "pt4", 00:14:36.389 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.389 "is_configured": true, 00:14:36.389 
"data_offset": 2048, 00:14:36.389 "data_size": 63488 00:14:36.389 } 00:14:36.389 ] 00:14:36.389 }' 00:14:36.389 05:00:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.389 05:00:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.649 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:36.649 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.649 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.649 [2024-11-21 05:00:53.374547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.649 [2024-11-21 05:00:53.374579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.649 [2024-11-21 05:00:53.374686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.649 [2024-11-21 05:00:53.374767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.649 [2024-11-21 05:00:53.374785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:14:36.649 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.909 05:00:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.910 [2024-11-21 05:00:53.458387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:36.910 [2024-11-21 05:00:53.458455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.910 [2024-11-21 05:00:53.458475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:36.910 [2024-11-21 05:00:53.458489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.910 [2024-11-21 05:00:53.460972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.910 [2024-11-21 05:00:53.461016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:36.910 [2024-11-21 05:00:53.461127] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:36.910 [2024-11-21 05:00:53.461173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:36.910 pt2 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.910 "name": "raid_bdev1", 00:14:36.910 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:36.910 "strip_size_kb": 64, 00:14:36.910 "state": "configuring", 00:14:36.910 "raid_level": "raid5f", 00:14:36.910 "superblock": true, 00:14:36.910 
"num_base_bdevs": 4, 00:14:36.910 "num_base_bdevs_discovered": 1, 00:14:36.910 "num_base_bdevs_operational": 3, 00:14:36.910 "base_bdevs_list": [ 00:14:36.910 { 00:14:36.910 "name": null, 00:14:36.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.910 "is_configured": false, 00:14:36.910 "data_offset": 2048, 00:14:36.910 "data_size": 63488 00:14:36.910 }, 00:14:36.910 { 00:14:36.910 "name": "pt2", 00:14:36.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.910 "is_configured": true, 00:14:36.910 "data_offset": 2048, 00:14:36.910 "data_size": 63488 00:14:36.910 }, 00:14:36.910 { 00:14:36.910 "name": null, 00:14:36.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.910 "is_configured": false, 00:14:36.910 "data_offset": 2048, 00:14:36.910 "data_size": 63488 00:14:36.910 }, 00:14:36.910 { 00:14:36.910 "name": null, 00:14:36.910 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.910 "is_configured": false, 00:14:36.910 "data_offset": 2048, 00:14:36.910 "data_size": 63488 00:14:36.910 } 00:14:36.910 ] 00:14:36.910 }' 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.910 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.170 [2024-11-21 05:00:53.845900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:37.170 [2024-11-21 
05:00:53.845967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.170 [2024-11-21 05:00:53.845990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:37.170 [2024-11-21 05:00:53.846008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.170 [2024-11-21 05:00:53.846532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.170 [2024-11-21 05:00:53.846560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:37.170 [2024-11-21 05:00:53.846652] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:37.170 [2024-11-21 05:00:53.846701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:37.170 pt3 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.170 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.171 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.171 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.431 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.431 "name": "raid_bdev1", 00:14:37.431 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:37.431 "strip_size_kb": 64, 00:14:37.431 "state": "configuring", 00:14:37.431 "raid_level": "raid5f", 00:14:37.431 "superblock": true, 00:14:37.431 "num_base_bdevs": 4, 00:14:37.431 "num_base_bdevs_discovered": 2, 00:14:37.431 "num_base_bdevs_operational": 3, 00:14:37.431 "base_bdevs_list": [ 00:14:37.431 { 00:14:37.431 "name": null, 00:14:37.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.431 "is_configured": false, 00:14:37.431 "data_offset": 2048, 00:14:37.431 "data_size": 63488 00:14:37.431 }, 00:14:37.431 { 00:14:37.431 "name": "pt2", 00:14:37.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.431 "is_configured": true, 00:14:37.431 "data_offset": 2048, 00:14:37.431 "data_size": 63488 00:14:37.431 }, 00:14:37.431 { 00:14:37.431 "name": "pt3", 00:14:37.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.431 "is_configured": true, 00:14:37.431 "data_offset": 2048, 00:14:37.431 "data_size": 63488 00:14:37.431 }, 00:14:37.431 { 00:14:37.431 "name": null, 00:14:37.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:37.431 "is_configured": false, 00:14:37.431 "data_offset": 2048, 
00:14:37.431 "data_size": 63488 00:14:37.431 } 00:14:37.431 ] 00:14:37.431 }' 00:14:37.431 05:00:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.431 05:00:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.690 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:37.690 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:37.690 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:37.690 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:37.690 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.690 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.690 [2024-11-21 05:00:54.301207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:37.691 [2024-11-21 05:00:54.301288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.691 [2024-11-21 05:00:54.301316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:37.691 [2024-11-21 05:00:54.301331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.691 [2024-11-21 05:00:54.301874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.691 [2024-11-21 05:00:54.301915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:37.691 [2024-11-21 05:00:54.302016] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:37.691 [2024-11-21 05:00:54.302051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:37.691 [2024-11-21 05:00:54.302204] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:37.691 [2024-11-21 05:00:54.302226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:37.691 [2024-11-21 05:00:54.302506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:37.691 [2024-11-21 05:00:54.303141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:37.691 [2024-11-21 05:00:54.303163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:14:37.691 [2024-11-21 05:00:54.303448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.691 pt4 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.691 
05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.691 "name": "raid_bdev1", 00:14:37.691 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:37.691 "strip_size_kb": 64, 00:14:37.691 "state": "online", 00:14:37.691 "raid_level": "raid5f", 00:14:37.691 "superblock": true, 00:14:37.691 "num_base_bdevs": 4, 00:14:37.691 "num_base_bdevs_discovered": 3, 00:14:37.691 "num_base_bdevs_operational": 3, 00:14:37.691 "base_bdevs_list": [ 00:14:37.691 { 00:14:37.691 "name": null, 00:14:37.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.691 "is_configured": false, 00:14:37.691 "data_offset": 2048, 00:14:37.691 "data_size": 63488 00:14:37.691 }, 00:14:37.691 { 00:14:37.691 "name": "pt2", 00:14:37.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.691 "is_configured": true, 00:14:37.691 "data_offset": 2048, 00:14:37.691 "data_size": 63488 00:14:37.691 }, 00:14:37.691 { 00:14:37.691 "name": "pt3", 00:14:37.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.691 "is_configured": true, 00:14:37.691 "data_offset": 2048, 00:14:37.691 "data_size": 63488 00:14:37.691 }, 00:14:37.691 { 00:14:37.691 "name": "pt4", 00:14:37.691 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:37.691 "is_configured": true, 00:14:37.691 "data_offset": 2048, 00:14:37.691 "data_size": 63488 00:14:37.691 } 00:14:37.691 ] 00:14:37.691 }' 00:14:37.691 05:00:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.691 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.258 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:38.258 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.258 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.258 [2024-11-21 05:00:54.769338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.259 [2024-11-21 05:00:54.769398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.259 [2024-11-21 05:00:54.769499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.259 [2024-11-21 05:00:54.769590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.259 [2024-11-21 05:00:54.769602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.259 [2024-11-21 05:00:54.845211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:38.259 [2024-11-21 05:00:54.845281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.259 [2024-11-21 05:00:54.845307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:38.259 [2024-11-21 05:00:54.845318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.259 [2024-11-21 05:00:54.848110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.259 [2024-11-21 05:00:54.848149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:38.259 [2024-11-21 05:00:54.848241] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:38.259 [2024-11-21 05:00:54.848305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:38.259 
[2024-11-21 05:00:54.848441] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:38.259 [2024-11-21 05:00:54.848460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.259 [2024-11-21 05:00:54.848483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:38.259 [2024-11-21 05:00:54.848534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.259 [2024-11-21 05:00:54.848639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:38.259 pt1 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.259 "name": "raid_bdev1", 00:14:38.259 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:38.259 "strip_size_kb": 64, 00:14:38.259 "state": "configuring", 00:14:38.259 "raid_level": "raid5f", 00:14:38.259 "superblock": true, 00:14:38.259 "num_base_bdevs": 4, 00:14:38.259 "num_base_bdevs_discovered": 2, 00:14:38.259 "num_base_bdevs_operational": 3, 00:14:38.259 "base_bdevs_list": [ 00:14:38.259 { 00:14:38.259 "name": null, 00:14:38.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.259 "is_configured": false, 00:14:38.259 "data_offset": 2048, 00:14:38.259 "data_size": 63488 00:14:38.259 }, 00:14:38.259 { 00:14:38.259 "name": "pt2", 00:14:38.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.259 "is_configured": true, 00:14:38.259 "data_offset": 2048, 00:14:38.259 "data_size": 63488 00:14:38.259 }, 00:14:38.259 { 00:14:38.259 "name": "pt3", 00:14:38.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.259 "is_configured": true, 00:14:38.259 "data_offset": 2048, 00:14:38.259 "data_size": 63488 00:14:38.259 }, 00:14:38.259 { 00:14:38.259 "name": null, 00:14:38.259 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.259 "is_configured": false, 00:14:38.259 "data_offset": 2048, 00:14:38.259 "data_size": 63488 00:14:38.259 } 00:14:38.259 ] 
00:14:38.259 }' 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.259 05:00:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.828 [2024-11-21 05:00:55.392344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:38.828 [2024-11-21 05:00:55.392451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.828 [2024-11-21 05:00:55.392477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:38.828 [2024-11-21 05:00:55.392492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.828 [2024-11-21 05:00:55.393026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.828 [2024-11-21 05:00:55.393052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:38.828 [2024-11-21 05:00:55.393193] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:38.828 [2024-11-21 05:00:55.393231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:38.828 [2024-11-21 05:00:55.393357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:38.828 [2024-11-21 05:00:55.393371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:38.828 [2024-11-21 05:00:55.393676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:38.828 [2024-11-21 05:00:55.394344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:38.828 [2024-11-21 05:00:55.394368] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:38.828 [2024-11-21 05:00:55.394618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.828 pt4 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.828 05:00:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.828 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.829 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.829 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.829 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.829 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.829 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.829 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.829 "name": "raid_bdev1", 00:14:38.829 "uuid": "24bead92-17c0-455a-a5a4-944ea6cee03a", 00:14:38.829 "strip_size_kb": 64, 00:14:38.829 "state": "online", 00:14:38.829 "raid_level": "raid5f", 00:14:38.829 "superblock": true, 00:14:38.829 "num_base_bdevs": 4, 00:14:38.829 "num_base_bdevs_discovered": 3, 00:14:38.829 "num_base_bdevs_operational": 3, 00:14:38.829 "base_bdevs_list": [ 00:14:38.829 { 00:14:38.829 "name": null, 00:14:38.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.829 "is_configured": false, 00:14:38.829 "data_offset": 2048, 00:14:38.829 "data_size": 63488 00:14:38.829 }, 00:14:38.829 { 00:14:38.829 "name": "pt2", 00:14:38.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.829 "is_configured": true, 00:14:38.829 "data_offset": 2048, 00:14:38.829 "data_size": 63488 00:14:38.829 }, 00:14:38.829 { 00:14:38.829 "name": "pt3", 00:14:38.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.829 "is_configured": true, 00:14:38.829 "data_offset": 2048, 00:14:38.829 "data_size": 63488 
00:14:38.829 }, 00:14:38.829 { 00:14:38.829 "name": "pt4", 00:14:38.829 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.829 "is_configured": true, 00:14:38.829 "data_offset": 2048, 00:14:38.829 "data_size": 63488 00:14:38.829 } 00:14:38.829 ] 00:14:38.829 }' 00:14:38.829 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.829 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:39.397 [2024-11-21 05:00:55.856790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 24bead92-17c0-455a-a5a4-944ea6cee03a '!=' 24bead92-17c0-455a-a5a4-944ea6cee03a ']' 00:14:39.397 05:00:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94686 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 94686 ']' 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 94686 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94686 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.397 killing process with pid 94686 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94686' 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 94686 00:14:39.397 [2024-11-21 05:00:55.943796] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.397 [2024-11-21 05:00:55.943910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.397 05:00:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 94686 00:14:39.397 [2024-11-21 05:00:55.944011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.397 [2024-11-21 05:00:55.944025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:39.397 [2024-11-21 05:00:56.026063] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.657 05:00:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:39.657 
00:14:39.657 real 0m7.199s 00:14:39.657 user 0m11.788s 00:14:39.657 sys 0m1.686s 00:14:39.657 05:00:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.657 05:00:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.657 ************************************ 00:14:39.657 END TEST raid5f_superblock_test 00:14:39.657 ************************************ 00:14:39.916 05:00:56 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:39.916 05:00:56 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:39.916 05:00:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:39.916 05:00:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.916 05:00:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.916 ************************************ 00:14:39.916 START TEST raid5f_rebuild_test 00:14:39.916 ************************************ 00:14:39.916 05:00:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:14:39.916 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:39.917 05:00:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95160 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95160 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 95160 ']' 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.917 05:00:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.917 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:39.917 Zero copy mechanism will not be used. 00:14:39.917 [2024-11-21 05:00:56.526708] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:14:39.917 [2024-11-21 05:00:56.526843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95160 ] 00:14:40.176 [2024-11-21 05:00:56.699075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.176 [2024-11-21 05:00:56.738068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.176 [2024-11-21 05:00:56.813454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.176 [2024-11-21 05:00:56.813495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.744 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.744 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:40.744 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:40.744 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:40.744 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.744 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.744 BaseBdev1_malloc 00:14:40.744 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.744 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:40.744 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.744 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.744 [2024-11-21 05:00:57.379836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:40.745 [2024-11-21 05:00:57.379908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.745 [2024-11-21 05:00:57.379944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:40.745 [2024-11-21 05:00:57.379958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.745 [2024-11-21 05:00:57.382557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.745 [2024-11-21 05:00:57.382586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:40.745 BaseBdev1 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.745 BaseBdev2_malloc 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.745 [2024-11-21 05:00:57.414213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:40.745 [2024-11-21 05:00:57.414261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.745 [2024-11-21 05:00:57.414282] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:40.745 [2024-11-21 05:00:57.414291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.745 [2024-11-21 05:00:57.416723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.745 [2024-11-21 05:00:57.416751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:40.745 BaseBdev2 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.745 BaseBdev3_malloc 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.745 [2024-11-21 05:00:57.448620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:40.745 [2024-11-21 05:00:57.448678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.745 [2024-11-21 05:00:57.448703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:40.745 [2024-11-21 05:00:57.448712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.745 
[2024-11-21 05:00:57.451237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.745 [2024-11-21 05:00:57.451265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:40.745 BaseBdev3 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.745 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.005 BaseBdev4_malloc 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.005 [2024-11-21 05:00:57.491188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:41.005 [2024-11-21 05:00:57.491234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.005 [2024-11-21 05:00:57.491257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:41.005 [2024-11-21 05:00:57.491266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.005 [2024-11-21 05:00:57.493562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.005 [2024-11-21 05:00:57.493590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:14:41.005 BaseBdev4 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.005 spare_malloc 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.005 spare_delay 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.005 [2024-11-21 05:00:57.537692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:41.005 [2024-11-21 05:00:57.537737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.005 [2024-11-21 05:00:57.537760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:41.005 [2024-11-21 05:00:57.537769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.005 [2024-11-21 05:00:57.540005] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.005 [2024-11-21 05:00:57.540034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:41.005 spare 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.005 [2024-11-21 05:00:57.549733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.005 [2024-11-21 05:00:57.551651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.005 [2024-11-21 05:00:57.551717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:41.005 [2024-11-21 05:00:57.551754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:41.005 [2024-11-21 05:00:57.551840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:41.005 [2024-11-21 05:00:57.551850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:41.005 [2024-11-21 05:00:57.552099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:41.005 [2024-11-21 05:00:57.552578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:41.005 [2024-11-21 05:00:57.552599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:41.005 [2024-11-21 05:00:57.552709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.005 05:00:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.005 "name": "raid_bdev1", 00:14:41.005 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:41.005 "strip_size_kb": 64, 00:14:41.005 "state": "online", 00:14:41.005 
"raid_level": "raid5f", 00:14:41.005 "superblock": false, 00:14:41.005 "num_base_bdevs": 4, 00:14:41.005 "num_base_bdevs_discovered": 4, 00:14:41.005 "num_base_bdevs_operational": 4, 00:14:41.005 "base_bdevs_list": [ 00:14:41.005 { 00:14:41.005 "name": "BaseBdev1", 00:14:41.005 "uuid": "55906fc6-3a96-5e6d-b69a-5a137996f6a9", 00:14:41.005 "is_configured": true, 00:14:41.005 "data_offset": 0, 00:14:41.005 "data_size": 65536 00:14:41.005 }, 00:14:41.005 { 00:14:41.005 "name": "BaseBdev2", 00:14:41.005 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:41.005 "is_configured": true, 00:14:41.005 "data_offset": 0, 00:14:41.005 "data_size": 65536 00:14:41.005 }, 00:14:41.005 { 00:14:41.005 "name": "BaseBdev3", 00:14:41.005 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:41.005 "is_configured": true, 00:14:41.005 "data_offset": 0, 00:14:41.005 "data_size": 65536 00:14:41.005 }, 00:14:41.005 { 00:14:41.005 "name": "BaseBdev4", 00:14:41.005 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:41.005 "is_configured": true, 00:14:41.005 "data_offset": 0, 00:14:41.005 "data_size": 65536 00:14:41.005 } 00:14:41.005 ] 00:14:41.005 }' 00:14:41.005 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.006 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.573 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.573 05:00:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:41.573 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.573 05:00:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.573 [2024-11-21 05:00:58.006974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:41.573 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:41.573 [2024-11-21 05:00:58.282399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:41.573 /dev/nbd0 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.833 1+0 records in 00:14:41.833 1+0 records out 00:14:41.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042263 s, 9.7 MB/s 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:41.833 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:42.092 512+0 records in 00:14:42.092 512+0 records out 00:14:42.092 100663296 bytes (101 MB, 96 MiB) copied, 0.424072 s, 237 MB/s 00:14:42.092 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:42.092 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.092 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:42.092 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:42.092 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:42.092 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.092 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:42.351 
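The `write_unit_size=384` assignment and the dd transfer totals above also follow from the same geometry: a 64 KiB strip is 128 blocks of 512 bytes, a full raid5f stripe spans 3 data strips, and 512 full-stripe writes of 196608 bytes each give the 100663296 bytes dd reports. A hedged sketch of that arithmetic (names are illustrative, not from bdev_raid.sh):

```python
# full-stripe write sizing for the dd step above, from logged values
strip_size_kb = 64   # "strip_size_kb": 64 in the raid bdev JSON
blocklen = 512       # bytes per block
num_base_bdevs = 4   # raid5f: one strip per stripe holds parity

strip_blocks = strip_size_kb * 1024 // blocklen        # blocks per strip
write_unit_size = (num_base_bdevs - 1) * strip_blocks  # data blocks per full stripe
full_stripe_bytes = write_unit_size * blocklen         # dd's bs= value
total_bytes = full_stripe_bytes * 512                  # dd's count=512

print(write_unit_size)   # 384, matching "write_unit_size=384"
print(full_stripe_bytes) # 196608, the bs= used by dd
print(total_bytes)       # 100663296, as dd reported
```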
[2024-11-21 05:00:58.987168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.351 05:00:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.351 [2024-11-21 05:00:59.003238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.351 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.351 "name": "raid_bdev1", 00:14:42.351 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:42.351 "strip_size_kb": 64, 00:14:42.352 "state": "online", 00:14:42.352 "raid_level": "raid5f", 00:14:42.352 "superblock": false, 00:14:42.352 "num_base_bdevs": 4, 00:14:42.352 "num_base_bdevs_discovered": 3, 00:14:42.352 "num_base_bdevs_operational": 3, 00:14:42.352 "base_bdevs_list": [ 00:14:42.352 { 00:14:42.352 "name": null, 00:14:42.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.352 "is_configured": false, 00:14:42.352 "data_offset": 0, 00:14:42.352 "data_size": 65536 00:14:42.352 }, 00:14:42.352 { 00:14:42.352 "name": "BaseBdev2", 00:14:42.352 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:42.352 "is_configured": true, 00:14:42.352 "data_offset": 0, 00:14:42.352 "data_size": 65536 00:14:42.352 }, 00:14:42.352 { 00:14:42.352 "name": "BaseBdev3", 00:14:42.352 "uuid": 
"6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:42.352 "is_configured": true, 00:14:42.352 "data_offset": 0, 00:14:42.352 "data_size": 65536 00:14:42.352 }, 00:14:42.352 { 00:14:42.352 "name": "BaseBdev4", 00:14:42.352 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:42.352 "is_configured": true, 00:14:42.352 "data_offset": 0, 00:14:42.352 "data_size": 65536 00:14:42.352 } 00:14:42.352 ] 00:14:42.352 }' 00:14:42.352 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.352 05:00:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.917 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:42.917 05:00:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.917 05:00:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.917 [2024-11-21 05:00:59.386625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.917 [2024-11-21 05:00:59.391060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:42.917 05:00:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.917 05:00:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:42.917 [2024-11-21 05:00:59.393366] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.853 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.854 05:01:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.854 "name": "raid_bdev1", 00:14:43.854 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:43.854 "strip_size_kb": 64, 00:14:43.854 "state": "online", 00:14:43.854 "raid_level": "raid5f", 00:14:43.854 "superblock": false, 00:14:43.854 "num_base_bdevs": 4, 00:14:43.854 "num_base_bdevs_discovered": 4, 00:14:43.854 "num_base_bdevs_operational": 4, 00:14:43.854 "process": { 00:14:43.854 "type": "rebuild", 00:14:43.854 "target": "spare", 00:14:43.854 "progress": { 00:14:43.854 "blocks": 19200, 00:14:43.854 "percent": 9 00:14:43.854 } 00:14:43.854 }, 00:14:43.854 "base_bdevs_list": [ 00:14:43.854 { 00:14:43.854 "name": "spare", 00:14:43.854 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:43.854 "is_configured": true, 00:14:43.854 "data_offset": 0, 00:14:43.854 "data_size": 65536 00:14:43.854 }, 00:14:43.854 { 00:14:43.854 "name": "BaseBdev2", 00:14:43.854 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:43.854 "is_configured": true, 00:14:43.854 "data_offset": 0, 00:14:43.854 "data_size": 65536 00:14:43.854 }, 00:14:43.854 { 00:14:43.854 "name": "BaseBdev3", 00:14:43.854 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:43.854 "is_configured": true, 00:14:43.854 "data_offset": 0, 00:14:43.854 "data_size": 65536 00:14:43.854 }, 
00:14:43.854 { 00:14:43.854 "name": "BaseBdev4", 00:14:43.854 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:43.854 "is_configured": true, 00:14:43.854 "data_offset": 0, 00:14:43.854 "data_size": 65536 00:14:43.854 } 00:14:43.854 ] 00:14:43.854 }' 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.854 05:01:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.854 [2024-11-21 05:01:00.537927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.114 [2024-11-21 05:01:00.599974] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:44.114 [2024-11-21 05:01:00.600115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.114 [2024-11-21 05:01:00.600215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.114 [2024-11-21 05:01:00.600261] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.114 "name": "raid_bdev1", 00:14:44.114 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:44.114 "strip_size_kb": 64, 00:14:44.114 "state": "online", 00:14:44.114 "raid_level": "raid5f", 00:14:44.114 "superblock": false, 00:14:44.114 "num_base_bdevs": 4, 00:14:44.114 "num_base_bdevs_discovered": 3, 00:14:44.114 "num_base_bdevs_operational": 3, 00:14:44.114 "base_bdevs_list": [ 00:14:44.114 { 00:14:44.114 "name": null, 00:14:44.114 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:44.114 "is_configured": false, 00:14:44.114 "data_offset": 0, 00:14:44.114 "data_size": 65536 00:14:44.114 }, 00:14:44.114 { 00:14:44.114 "name": "BaseBdev2", 00:14:44.114 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:44.114 "is_configured": true, 00:14:44.114 "data_offset": 0, 00:14:44.114 "data_size": 65536 00:14:44.114 }, 00:14:44.114 { 00:14:44.114 "name": "BaseBdev3", 00:14:44.114 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:44.114 "is_configured": true, 00:14:44.114 "data_offset": 0, 00:14:44.114 "data_size": 65536 00:14:44.114 }, 00:14:44.114 { 00:14:44.114 "name": "BaseBdev4", 00:14:44.114 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:44.114 "is_configured": true, 00:14:44.114 "data_offset": 0, 00:14:44.114 "data_size": 65536 00:14:44.114 } 00:14:44.114 ] 00:14:44.114 }' 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.114 05:01:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.373 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.373 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.373 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.373 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.373 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.373 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.373 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.373 05:01:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.373 05:01:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.373 05:01:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.373 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.373 "name": "raid_bdev1", 00:14:44.373 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:44.373 "strip_size_kb": 64, 00:14:44.373 "state": "online", 00:14:44.373 "raid_level": "raid5f", 00:14:44.373 "superblock": false, 00:14:44.373 "num_base_bdevs": 4, 00:14:44.373 "num_base_bdevs_discovered": 3, 00:14:44.373 "num_base_bdevs_operational": 3, 00:14:44.373 "base_bdevs_list": [ 00:14:44.373 { 00:14:44.373 "name": null, 00:14:44.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.373 "is_configured": false, 00:14:44.373 "data_offset": 0, 00:14:44.373 "data_size": 65536 00:14:44.373 }, 00:14:44.373 { 00:14:44.374 "name": "BaseBdev2", 00:14:44.374 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:44.374 "is_configured": true, 00:14:44.374 "data_offset": 0, 00:14:44.374 "data_size": 65536 00:14:44.374 }, 00:14:44.374 { 00:14:44.374 "name": "BaseBdev3", 00:14:44.374 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:44.374 "is_configured": true, 00:14:44.374 "data_offset": 0, 00:14:44.374 "data_size": 65536 00:14:44.374 }, 00:14:44.374 { 00:14:44.374 "name": "BaseBdev4", 00:14:44.374 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:44.374 "is_configured": true, 00:14:44.374 "data_offset": 0, 00:14:44.374 "data_size": 65536 00:14:44.374 } 00:14:44.374 ] 00:14:44.374 }' 00:14:44.374 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.374 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.374 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.632 05:01:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.632 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:44.632 05:01:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.632 05:01:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.632 [2024-11-21 05:01:01.153258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.632 [2024-11-21 05:01:01.157617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:44.632 05:01:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.632 05:01:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:44.632 [2024-11-21 05:01:01.159792] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.569 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.569 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.569 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.569 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.569 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.569 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.569 05:01:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.569 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.569 05:01:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.569 05:01:02 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.569 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.569 "name": "raid_bdev1", 00:14:45.569 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:45.569 "strip_size_kb": 64, 00:14:45.569 "state": "online", 00:14:45.569 "raid_level": "raid5f", 00:14:45.569 "superblock": false, 00:14:45.569 "num_base_bdevs": 4, 00:14:45.569 "num_base_bdevs_discovered": 4, 00:14:45.569 "num_base_bdevs_operational": 4, 00:14:45.569 "process": { 00:14:45.569 "type": "rebuild", 00:14:45.569 "target": "spare", 00:14:45.569 "progress": { 00:14:45.569 "blocks": 19200, 00:14:45.569 "percent": 9 00:14:45.569 } 00:14:45.569 }, 00:14:45.569 "base_bdevs_list": [ 00:14:45.569 { 00:14:45.570 "name": "spare", 00:14:45.570 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:45.570 "is_configured": true, 00:14:45.570 "data_offset": 0, 00:14:45.570 "data_size": 65536 00:14:45.570 }, 00:14:45.570 { 00:14:45.570 "name": "BaseBdev2", 00:14:45.570 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:45.570 "is_configured": true, 00:14:45.570 "data_offset": 0, 00:14:45.570 "data_size": 65536 00:14:45.570 }, 00:14:45.570 { 00:14:45.570 "name": "BaseBdev3", 00:14:45.570 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:45.570 "is_configured": true, 00:14:45.570 "data_offset": 0, 00:14:45.570 "data_size": 65536 00:14:45.570 }, 00:14:45.570 { 00:14:45.570 "name": "BaseBdev4", 00:14:45.570 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:45.570 "is_configured": true, 00:14:45.570 "data_offset": 0, 00:14:45.570 "data_size": 65536 00:14:45.570 } 00:14:45.570 ] 00:14:45.570 }' 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=514 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.570 05:01:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.829 05:01:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.829 05:01:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.829 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.829 "name": "raid_bdev1", 00:14:45.829 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 
00:14:45.829 "strip_size_kb": 64, 00:14:45.829 "state": "online", 00:14:45.829 "raid_level": "raid5f", 00:14:45.829 "superblock": false, 00:14:45.829 "num_base_bdevs": 4, 00:14:45.829 "num_base_bdevs_discovered": 4, 00:14:45.829 "num_base_bdevs_operational": 4, 00:14:45.829 "process": { 00:14:45.829 "type": "rebuild", 00:14:45.829 "target": "spare", 00:14:45.829 "progress": { 00:14:45.829 "blocks": 21120, 00:14:45.829 "percent": 10 00:14:45.829 } 00:14:45.829 }, 00:14:45.829 "base_bdevs_list": [ 00:14:45.829 { 00:14:45.829 "name": "spare", 00:14:45.829 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:45.829 "is_configured": true, 00:14:45.829 "data_offset": 0, 00:14:45.829 "data_size": 65536 00:14:45.829 }, 00:14:45.829 { 00:14:45.829 "name": "BaseBdev2", 00:14:45.829 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:45.829 "is_configured": true, 00:14:45.829 "data_offset": 0, 00:14:45.829 "data_size": 65536 00:14:45.829 }, 00:14:45.829 { 00:14:45.829 "name": "BaseBdev3", 00:14:45.829 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:45.829 "is_configured": true, 00:14:45.829 "data_offset": 0, 00:14:45.829 "data_size": 65536 00:14:45.829 }, 00:14:45.829 { 00:14:45.829 "name": "BaseBdev4", 00:14:45.829 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:45.829 "is_configured": true, 00:14:45.829 "data_offset": 0, 00:14:45.829 "data_size": 65536 00:14:45.829 } 00:14:45.829 ] 00:14:45.829 }' 00:14:45.829 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.829 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.829 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.829 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.829 05:01:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.766 05:01:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.766 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.766 "name": "raid_bdev1", 00:14:46.766 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:46.766 "strip_size_kb": 64, 00:14:46.766 "state": "online", 00:14:46.766 "raid_level": "raid5f", 00:14:46.766 "superblock": false, 00:14:46.766 "num_base_bdevs": 4, 00:14:46.766 "num_base_bdevs_discovered": 4, 00:14:46.766 "num_base_bdevs_operational": 4, 00:14:46.766 "process": { 00:14:46.766 "type": "rebuild", 00:14:46.766 "target": "spare", 00:14:46.766 "progress": { 00:14:46.766 "blocks": 42240, 00:14:46.766 "percent": 21 00:14:46.766 } 00:14:46.766 }, 00:14:46.766 "base_bdevs_list": [ 00:14:46.766 { 00:14:46.766 "name": "spare", 00:14:46.766 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 
00:14:46.766 "is_configured": true, 00:14:46.766 "data_offset": 0, 00:14:46.767 "data_size": 65536 00:14:46.767 }, 00:14:46.767 { 00:14:46.767 "name": "BaseBdev2", 00:14:46.767 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:46.767 "is_configured": true, 00:14:46.767 "data_offset": 0, 00:14:46.767 "data_size": 65536 00:14:46.767 }, 00:14:46.767 { 00:14:46.767 "name": "BaseBdev3", 00:14:46.767 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:46.767 "is_configured": true, 00:14:46.767 "data_offset": 0, 00:14:46.767 "data_size": 65536 00:14:46.767 }, 00:14:46.767 { 00:14:46.767 "name": "BaseBdev4", 00:14:46.767 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:46.767 "is_configured": true, 00:14:46.767 "data_offset": 0, 00:14:46.767 "data_size": 65536 00:14:46.767 } 00:14:46.767 ] 00:14:46.767 }' 00:14:46.767 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.026 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.026 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.026 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.026 05:01:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.961 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.961 "name": "raid_bdev1", 00:14:47.961 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:47.961 "strip_size_kb": 64, 00:14:47.961 "state": "online", 00:14:47.961 "raid_level": "raid5f", 00:14:47.961 "superblock": false, 00:14:47.961 "num_base_bdevs": 4, 00:14:47.961 "num_base_bdevs_discovered": 4, 00:14:47.961 "num_base_bdevs_operational": 4, 00:14:47.961 "process": { 00:14:47.961 "type": "rebuild", 00:14:47.961 "target": "spare", 00:14:47.961 "progress": { 00:14:47.961 "blocks": 65280, 00:14:47.961 "percent": 33 00:14:47.961 } 00:14:47.961 }, 00:14:47.961 "base_bdevs_list": [ 00:14:47.961 { 00:14:47.962 "name": "spare", 00:14:47.962 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:47.962 "is_configured": true, 00:14:47.962 "data_offset": 0, 00:14:47.962 "data_size": 65536 00:14:47.962 }, 00:14:47.962 { 00:14:47.962 "name": "BaseBdev2", 00:14:47.962 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:47.962 "is_configured": true, 00:14:47.962 "data_offset": 0, 00:14:47.962 "data_size": 65536 00:14:47.962 }, 00:14:47.962 { 00:14:47.962 "name": "BaseBdev3", 00:14:47.962 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:47.962 "is_configured": true, 00:14:47.962 "data_offset": 0, 00:14:47.962 "data_size": 65536 00:14:47.962 }, 00:14:47.962 { 00:14:47.962 "name": 
"BaseBdev4", 00:14:47.962 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:47.962 "is_configured": true, 00:14:47.962 "data_offset": 0, 00:14:47.962 "data_size": 65536 00:14:47.962 } 00:14:47.962 ] 00:14:47.962 }' 00:14:47.962 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.962 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.962 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.962 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.962 05:01:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.338 05:01:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.338 "name": "raid_bdev1", 00:14:49.338 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:49.338 "strip_size_kb": 64, 00:14:49.338 "state": "online", 00:14:49.338 "raid_level": "raid5f", 00:14:49.338 "superblock": false, 00:14:49.338 "num_base_bdevs": 4, 00:14:49.338 "num_base_bdevs_discovered": 4, 00:14:49.338 "num_base_bdevs_operational": 4, 00:14:49.338 "process": { 00:14:49.338 "type": "rebuild", 00:14:49.338 "target": "spare", 00:14:49.338 "progress": { 00:14:49.338 "blocks": 86400, 00:14:49.338 "percent": 43 00:14:49.338 } 00:14:49.338 }, 00:14:49.338 "base_bdevs_list": [ 00:14:49.338 { 00:14:49.338 "name": "spare", 00:14:49.338 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:49.338 "is_configured": true, 00:14:49.338 "data_offset": 0, 00:14:49.338 "data_size": 65536 00:14:49.338 }, 00:14:49.338 { 00:14:49.338 "name": "BaseBdev2", 00:14:49.338 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:49.338 "is_configured": true, 00:14:49.338 "data_offset": 0, 00:14:49.338 "data_size": 65536 00:14:49.338 }, 00:14:49.338 { 00:14:49.338 "name": "BaseBdev3", 00:14:49.338 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:49.338 "is_configured": true, 00:14:49.338 "data_offset": 0, 00:14:49.338 "data_size": 65536 00:14:49.338 }, 00:14:49.338 { 00:14:49.338 "name": "BaseBdev4", 00:14:49.338 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:49.338 "is_configured": true, 00:14:49.338 "data_offset": 0, 00:14:49.338 "data_size": 65536 00:14:49.338 } 00:14:49.338 ] 00:14:49.338 }' 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.338 05:01:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.275 "name": "raid_bdev1", 00:14:50.275 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:50.275 "strip_size_kb": 64, 00:14:50.275 "state": "online", 00:14:50.275 "raid_level": "raid5f", 00:14:50.275 "superblock": false, 00:14:50.275 "num_base_bdevs": 4, 00:14:50.275 "num_base_bdevs_discovered": 4, 00:14:50.275 "num_base_bdevs_operational": 4, 00:14:50.275 "process": { 00:14:50.275 "type": "rebuild", 00:14:50.275 "target": "spare", 00:14:50.275 "progress": { 00:14:50.275 "blocks": 107520, 00:14:50.275 "percent": 54 00:14:50.275 } 
00:14:50.275 }, 00:14:50.275 "base_bdevs_list": [ 00:14:50.275 { 00:14:50.275 "name": "spare", 00:14:50.275 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:50.275 "is_configured": true, 00:14:50.275 "data_offset": 0, 00:14:50.275 "data_size": 65536 00:14:50.275 }, 00:14:50.275 { 00:14:50.275 "name": "BaseBdev2", 00:14:50.275 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:50.275 "is_configured": true, 00:14:50.275 "data_offset": 0, 00:14:50.275 "data_size": 65536 00:14:50.275 }, 00:14:50.275 { 00:14:50.275 "name": "BaseBdev3", 00:14:50.275 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:50.275 "is_configured": true, 00:14:50.275 "data_offset": 0, 00:14:50.275 "data_size": 65536 00:14:50.275 }, 00:14:50.275 { 00:14:50.275 "name": "BaseBdev4", 00:14:50.275 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:50.275 "is_configured": true, 00:14:50.275 "data_offset": 0, 00:14:50.275 "data_size": 65536 00:14:50.275 } 00:14:50.275 ] 00:14:50.275 }' 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.275 05:01:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.212 05:01:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.212 05:01:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.212 05:01:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.212 05:01:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.212 
05:01:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.212 05:01:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.212 05:01:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.212 05:01:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.212 05:01:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.212 05:01:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.471 05:01:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.471 05:01:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.471 "name": "raid_bdev1", 00:14:51.471 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:51.471 "strip_size_kb": 64, 00:14:51.471 "state": "online", 00:14:51.471 "raid_level": "raid5f", 00:14:51.471 "superblock": false, 00:14:51.471 "num_base_bdevs": 4, 00:14:51.471 "num_base_bdevs_discovered": 4, 00:14:51.471 "num_base_bdevs_operational": 4, 00:14:51.471 "process": { 00:14:51.471 "type": "rebuild", 00:14:51.471 "target": "spare", 00:14:51.471 "progress": { 00:14:51.471 "blocks": 128640, 00:14:51.471 "percent": 65 00:14:51.471 } 00:14:51.471 }, 00:14:51.471 "base_bdevs_list": [ 00:14:51.471 { 00:14:51.471 "name": "spare", 00:14:51.471 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:51.471 "is_configured": true, 00:14:51.471 "data_offset": 0, 00:14:51.471 "data_size": 65536 00:14:51.471 }, 00:14:51.471 { 00:14:51.471 "name": "BaseBdev2", 00:14:51.471 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:51.471 "is_configured": true, 00:14:51.471 "data_offset": 0, 00:14:51.471 "data_size": 65536 00:14:51.471 }, 00:14:51.471 { 00:14:51.471 "name": "BaseBdev3", 00:14:51.471 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 
00:14:51.471 "is_configured": true, 00:14:51.471 "data_offset": 0, 00:14:51.471 "data_size": 65536 00:14:51.471 }, 00:14:51.471 { 00:14:51.471 "name": "BaseBdev4", 00:14:51.471 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:51.471 "is_configured": true, 00:14:51.471 "data_offset": 0, 00:14:51.471 "data_size": 65536 00:14:51.471 } 00:14:51.471 ] 00:14:51.471 }' 00:14:51.471 05:01:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.471 05:01:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.471 05:01:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.471 05:01:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.471 05:01:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.409 "name": "raid_bdev1", 00:14:52.409 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:52.409 "strip_size_kb": 64, 00:14:52.409 "state": "online", 00:14:52.409 "raid_level": "raid5f", 00:14:52.409 "superblock": false, 00:14:52.409 "num_base_bdevs": 4, 00:14:52.409 "num_base_bdevs_discovered": 4, 00:14:52.409 "num_base_bdevs_operational": 4, 00:14:52.409 "process": { 00:14:52.409 "type": "rebuild", 00:14:52.409 "target": "spare", 00:14:52.409 "progress": { 00:14:52.409 "blocks": 149760, 00:14:52.409 "percent": 76 00:14:52.409 } 00:14:52.409 }, 00:14:52.409 "base_bdevs_list": [ 00:14:52.409 { 00:14:52.409 "name": "spare", 00:14:52.409 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:52.409 "is_configured": true, 00:14:52.409 "data_offset": 0, 00:14:52.409 "data_size": 65536 00:14:52.409 }, 00:14:52.409 { 00:14:52.409 "name": "BaseBdev2", 00:14:52.409 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:52.409 "is_configured": true, 00:14:52.409 "data_offset": 0, 00:14:52.409 "data_size": 65536 00:14:52.409 }, 00:14:52.409 { 00:14:52.409 "name": "BaseBdev3", 00:14:52.409 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:52.409 "is_configured": true, 00:14:52.409 "data_offset": 0, 00:14:52.409 "data_size": 65536 00:14:52.409 }, 00:14:52.409 { 00:14:52.409 "name": "BaseBdev4", 00:14:52.409 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:52.409 "is_configured": true, 00:14:52.409 "data_offset": 0, 00:14:52.409 "data_size": 65536 00:14:52.409 } 00:14:52.409 ] 00:14:52.409 }' 00:14:52.409 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.671 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:52.671 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.671 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.671 05:01:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.623 "name": "raid_bdev1", 00:14:53.623 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:53.623 "strip_size_kb": 64, 00:14:53.623 "state": "online", 00:14:53.623 "raid_level": "raid5f", 00:14:53.623 "superblock": false, 00:14:53.623 "num_base_bdevs": 4, 00:14:53.623 "num_base_bdevs_discovered": 4, 00:14:53.623 "num_base_bdevs_operational": 4, 00:14:53.623 
"process": { 00:14:53.623 "type": "rebuild", 00:14:53.623 "target": "spare", 00:14:53.623 "progress": { 00:14:53.623 "blocks": 172800, 00:14:53.623 "percent": 87 00:14:53.623 } 00:14:53.623 }, 00:14:53.623 "base_bdevs_list": [ 00:14:53.623 { 00:14:53.623 "name": "spare", 00:14:53.623 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:53.623 "is_configured": true, 00:14:53.623 "data_offset": 0, 00:14:53.623 "data_size": 65536 00:14:53.623 }, 00:14:53.623 { 00:14:53.623 "name": "BaseBdev2", 00:14:53.623 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:53.623 "is_configured": true, 00:14:53.623 "data_offset": 0, 00:14:53.623 "data_size": 65536 00:14:53.623 }, 00:14:53.623 { 00:14:53.623 "name": "BaseBdev3", 00:14:53.623 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:53.623 "is_configured": true, 00:14:53.623 "data_offset": 0, 00:14:53.623 "data_size": 65536 00:14:53.623 }, 00:14:53.623 { 00:14:53.623 "name": "BaseBdev4", 00:14:53.623 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:53.623 "is_configured": true, 00:14:53.623 "data_offset": 0, 00:14:53.623 "data_size": 65536 00:14:53.623 } 00:14:53.623 ] 00:14:53.623 }' 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.623 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.882 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.882 05:01:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.823 "name": "raid_bdev1", 00:14:54.823 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:54.823 "strip_size_kb": 64, 00:14:54.823 "state": "online", 00:14:54.823 "raid_level": "raid5f", 00:14:54.823 "superblock": false, 00:14:54.823 "num_base_bdevs": 4, 00:14:54.823 "num_base_bdevs_discovered": 4, 00:14:54.823 "num_base_bdevs_operational": 4, 00:14:54.823 "process": { 00:14:54.823 "type": "rebuild", 00:14:54.823 "target": "spare", 00:14:54.823 "progress": { 00:14:54.823 "blocks": 193920, 00:14:54.823 "percent": 98 00:14:54.823 } 00:14:54.823 }, 00:14:54.823 "base_bdevs_list": [ 00:14:54.823 { 00:14:54.823 "name": "spare", 00:14:54.823 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:54.823 "is_configured": true, 00:14:54.823 "data_offset": 0, 00:14:54.823 "data_size": 65536 00:14:54.823 }, 00:14:54.823 { 00:14:54.823 "name": "BaseBdev2", 00:14:54.823 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:54.823 "is_configured": true, 00:14:54.823 
"data_offset": 0, 00:14:54.823 "data_size": 65536 00:14:54.823 }, 00:14:54.823 { 00:14:54.823 "name": "BaseBdev3", 00:14:54.823 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:54.823 "is_configured": true, 00:14:54.823 "data_offset": 0, 00:14:54.823 "data_size": 65536 00:14:54.823 }, 00:14:54.823 { 00:14:54.823 "name": "BaseBdev4", 00:14:54.823 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:54.823 "is_configured": true, 00:14:54.823 "data_offset": 0, 00:14:54.823 "data_size": 65536 00:14:54.823 } 00:14:54.823 ] 00:14:54.823 }' 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.823 [2024-11-21 05:01:11.511774] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:54.823 [2024-11-21 05:01:11.511924] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:54.823 [2024-11-21 05:01:11.512008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.823 05:01:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.204 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.204 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.204 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.204 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.205 "name": "raid_bdev1", 00:14:56.205 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:56.205 "strip_size_kb": 64, 00:14:56.205 "state": "online", 00:14:56.205 "raid_level": "raid5f", 00:14:56.205 "superblock": false, 00:14:56.205 "num_base_bdevs": 4, 00:14:56.205 "num_base_bdevs_discovered": 4, 00:14:56.205 "num_base_bdevs_operational": 4, 00:14:56.205 "base_bdevs_list": [ 00:14:56.205 { 00:14:56.205 "name": "spare", 00:14:56.205 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 }, 00:14:56.205 { 00:14:56.205 "name": "BaseBdev2", 00:14:56.205 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 }, 00:14:56.205 { 00:14:56.205 "name": "BaseBdev3", 00:14:56.205 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 }, 00:14:56.205 { 00:14:56.205 "name": "BaseBdev4", 00:14:56.205 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:56.205 "is_configured": 
true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 } 00:14:56.205 ] 00:14:56.205 }' 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.205 "name": "raid_bdev1", 00:14:56.205 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:56.205 "strip_size_kb": 64, 00:14:56.205 "state": 
"online", 00:14:56.205 "raid_level": "raid5f", 00:14:56.205 "superblock": false, 00:14:56.205 "num_base_bdevs": 4, 00:14:56.205 "num_base_bdevs_discovered": 4, 00:14:56.205 "num_base_bdevs_operational": 4, 00:14:56.205 "base_bdevs_list": [ 00:14:56.205 { 00:14:56.205 "name": "spare", 00:14:56.205 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 }, 00:14:56.205 { 00:14:56.205 "name": "BaseBdev2", 00:14:56.205 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 }, 00:14:56.205 { 00:14:56.205 "name": "BaseBdev3", 00:14:56.205 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 }, 00:14:56.205 { 00:14:56.205 "name": "BaseBdev4", 00:14:56.205 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 } 00:14:56.205 ] 00:14:56.205 }' 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.205 05:01:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.205 "name": "raid_bdev1", 00:14:56.205 "uuid": "06664590-1af9-43e1-95d8-2d030bcd4917", 00:14:56.205 "strip_size_kb": 64, 00:14:56.205 "state": "online", 00:14:56.205 "raid_level": "raid5f", 00:14:56.205 "superblock": false, 00:14:56.205 "num_base_bdevs": 4, 00:14:56.205 "num_base_bdevs_discovered": 4, 00:14:56.205 "num_base_bdevs_operational": 4, 00:14:56.205 "base_bdevs_list": [ 00:14:56.205 { 00:14:56.205 "name": "spare", 00:14:56.205 "uuid": "cb3d91b4-71d0-59d6-a9f4-2653476e2b6b", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 }, 00:14:56.205 { 00:14:56.205 
"name": "BaseBdev2", 00:14:56.205 "uuid": "39a21e73-ee16-510d-8aa2-cad3d8138f1c", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 }, 00:14:56.205 { 00:14:56.205 "name": "BaseBdev3", 00:14:56.205 "uuid": "6374b612-4f41-5e65-a644-cd728cc99cd1", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 }, 00:14:56.205 { 00:14:56.205 "name": "BaseBdev4", 00:14:56.205 "uuid": "7b67b6bf-f1c9-5bbb-a8ca-3f127f3fb678", 00:14:56.205 "is_configured": true, 00:14:56.205 "data_offset": 0, 00:14:56.205 "data_size": 65536 00:14:56.205 } 00:14:56.205 ] 00:14:56.205 }' 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.205 05:01:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.465 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:56.465 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.465 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.465 [2024-11-21 05:01:13.158770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.465 [2024-11-21 05:01:13.158846] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.465 [2024-11-21 05:01:13.158982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.465 [2024-11-21 05:01:13.159133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.465 [2024-11-21 05:01:13.159222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:56.465 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.465 05:01:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.465 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.465 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:56.465 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.465 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:56.725 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:56.725 /dev/nbd0 00:14:56.985 05:01:13 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.985 1+0 records in 00:14:56.985 1+0 records out 00:14:56.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572413 s, 7.2 MB/s 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:56.985 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:57.246 /dev/nbd1 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.246 1+0 records in 00:14:57.246 1+0 records out 00:14:57.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054508 s, 7.5 MB/s 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.246 05:01:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:57.506 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:57.506 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:57.506 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:57.506 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.506 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.506 05:01:14 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:57.506 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:57.506 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.506 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.506 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95160 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 95160 ']' 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 95160 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95160 00:14:57.766 killing process with pid 95160 00:14:57.766 Received shutdown signal, test time was about 60.000000 seconds 00:14:57.766 00:14:57.766 Latency(us) 00:14:57.766 [2024-11-21T05:01:14.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.766 [2024-11-21T05:01:14.501Z] =================================================================================================================== 00:14:57.766 [2024-11-21T05:01:14.501Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.766 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95160' 00:14:57.767 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 95160 00:14:57.767 [2024-11-21 05:01:14.342899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.767 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 95160 00:14:57.767 [2024-11-21 05:01:14.394431] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:58.028 00:14:58.028 real 0m18.163s 00:14:58.028 user 0m21.684s 00:14:58.028 sys 0m2.370s 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.028 ************************************ 00:14:58.028 END TEST raid5f_rebuild_test 00:14:58.028 ************************************ 00:14:58.028 05:01:14 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:14:58.028 05:01:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:58.028 05:01:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.028 05:01:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.028 ************************************ 00:14:58.028 START TEST raid5f_rebuild_test_sb 00:14:58.028 ************************************ 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:58.028 05:01:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95671 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95671 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 95671 ']' 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.028 05:01:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.288 [2024-11-21 05:01:14.787451] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:14:58.288 [2024-11-21 05:01:14.787677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95671 ] 00:14:58.288 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:58.288 Zero copy mechanism will not be used. 
00:14:58.288 [2024-11-21 05:01:14.959905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.288 [2024-11-21 05:01:14.986555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.547 [2024-11-21 05:01:15.030084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.547 [2024-11-21 05:01:15.030203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.120 BaseBdev1_malloc 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.120 [2024-11-21 05:01:15.648471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:59.120 [2024-11-21 05:01:15.648606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.120 [2024-11-21 05:01:15.648649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:59.120 
[2024-11-21 05:01:15.648674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.120 [2024-11-21 05:01:15.650920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.120 [2024-11-21 05:01:15.650960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:59.120 BaseBdev1 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.120 BaseBdev2_malloc 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.120 [2024-11-21 05:01:15.677272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:59.120 [2024-11-21 05:01:15.677322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.120 [2024-11-21 05:01:15.677341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:59.120 [2024-11-21 05:01:15.677350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.120 [2024-11-21 05:01:15.679409] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.120 [2024-11-21 05:01:15.679533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:59.120 BaseBdev2 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.120 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.120 BaseBdev3_malloc 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.121 [2024-11-21 05:01:15.706070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:59.121 [2024-11-21 05:01:15.706146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.121 [2024-11-21 05:01:15.706170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:59.121 [2024-11-21 05:01:15.706179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.121 [2024-11-21 05:01:15.708220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.121 [2024-11-21 05:01:15.708253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:59.121 BaseBdev3 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.121 BaseBdev4_malloc 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.121 [2024-11-21 05:01:15.744836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:59.121 [2024-11-21 05:01:15.744924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.121 [2024-11-21 05:01:15.744967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:59.121 [2024-11-21 05:01:15.744976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.121 [2024-11-21 05:01:15.747004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.121 [2024-11-21 05:01:15.747039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:59.121 BaseBdev4 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.121 05:01:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.121 spare_malloc 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.121 spare_delay 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.121 [2024-11-21 05:01:15.785641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:59.121 [2024-11-21 05:01:15.785689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.121 [2024-11-21 05:01:15.785710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:59.121 [2024-11-21 05:01:15.785718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.121 [2024-11-21 05:01:15.787826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.121 [2024-11-21 05:01:15.787914] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: spare 00:14:59.121 spare 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.121 [2024-11-21 05:01:15.797700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.121 [2024-11-21 05:01:15.799514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.121 [2024-11-21 05:01:15.799575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.121 [2024-11-21 05:01:15.799614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.121 [2024-11-21 05:01:15.799792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:59.121 [2024-11-21 05:01:15.799807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:59.121 [2024-11-21 05:01:15.800034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:59.121 [2024-11-21 05:01:15.800514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:59.121 [2024-11-21 05:01:15.800541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:59.121 [2024-11-21 05:01:15.800653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.121 05:01:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.121 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.382 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.382 "name": "raid_bdev1", 00:14:59.382 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:14:59.382 "strip_size_kb": 64, 00:14:59.382 "state": "online", 00:14:59.382 "raid_level": "raid5f", 00:14:59.382 "superblock": true, 
00:14:59.382 "num_base_bdevs": 4, 00:14:59.382 "num_base_bdevs_discovered": 4, 00:14:59.382 "num_base_bdevs_operational": 4, 00:14:59.382 "base_bdevs_list": [ 00:14:59.382 { 00:14:59.382 "name": "BaseBdev1", 00:14:59.382 "uuid": "d3603928-41d0-54bd-959c-c7cee4b39a35", 00:14:59.382 "is_configured": true, 00:14:59.382 "data_offset": 2048, 00:14:59.382 "data_size": 63488 00:14:59.382 }, 00:14:59.382 { 00:14:59.382 "name": "BaseBdev2", 00:14:59.382 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:14:59.382 "is_configured": true, 00:14:59.382 "data_offset": 2048, 00:14:59.382 "data_size": 63488 00:14:59.382 }, 00:14:59.382 { 00:14:59.382 "name": "BaseBdev3", 00:14:59.382 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:14:59.382 "is_configured": true, 00:14:59.382 "data_offset": 2048, 00:14:59.382 "data_size": 63488 00:14:59.382 }, 00:14:59.382 { 00:14:59.382 "name": "BaseBdev4", 00:14:59.382 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:14:59.382 "is_configured": true, 00:14:59.382 "data_offset": 2048, 00:14:59.382 "data_size": 63488 00:14:59.382 } 00:14:59.382 ] 00:14:59.382 }' 00:14:59.382 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.382 05:01:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.642 [2024-11-21 05:01:16.270466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.642 05:01:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.642 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:59.902 [2024-11-21 05:01:16.533881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:59.902 /dev/nbd0 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.902 1+0 records in 00:14:59.902 1+0 records out 00:14:59.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514945 s, 8.0 MB/s 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 
-- # size=4096 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:59.902 05:01:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:00.842 496+0 records in 00:15:00.842 496+0 records out 00:15:00.842 97517568 bytes (98 MB, 93 MiB) copied, 0.704554 s, 138 MB/s 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.842 [2024-11-21 05:01:17.541038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.842 [2024-11-21 05:01:17.569053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:00.842 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.102 05:01:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.102 "name": "raid_bdev1", 00:15:01.102 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:01.102 "strip_size_kb": 64, 00:15:01.102 "state": "online", 00:15:01.102 "raid_level": "raid5f", 00:15:01.102 "superblock": true, 00:15:01.102 "num_base_bdevs": 4, 00:15:01.102 "num_base_bdevs_discovered": 3, 00:15:01.102 "num_base_bdevs_operational": 3, 00:15:01.102 "base_bdevs_list": [ 00:15:01.102 { 00:15:01.102 "name": null, 00:15:01.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.102 "is_configured": false, 00:15:01.102 "data_offset": 0, 00:15:01.102 "data_size": 63488 00:15:01.102 }, 00:15:01.102 { 00:15:01.102 "name": "BaseBdev2", 00:15:01.102 "uuid": 
"c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:01.102 "is_configured": true, 00:15:01.102 "data_offset": 2048, 00:15:01.102 "data_size": 63488 00:15:01.102 }, 00:15:01.102 { 00:15:01.102 "name": "BaseBdev3", 00:15:01.102 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:01.102 "is_configured": true, 00:15:01.102 "data_offset": 2048, 00:15:01.102 "data_size": 63488 00:15:01.102 }, 00:15:01.102 { 00:15:01.102 "name": "BaseBdev4", 00:15:01.102 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:01.102 "is_configured": true, 00:15:01.102 "data_offset": 2048, 00:15:01.102 "data_size": 63488 00:15:01.102 } 00:15:01.102 ] 00:15:01.102 }' 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.102 05:01:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.362 05:01:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.362 05:01:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.362 05:01:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.362 [2024-11-21 05:01:18.032317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.362 [2024-11-21 05:01:18.036644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:15:01.362 05:01:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.362 05:01:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:01.362 [2024-11-21 05:01:18.038827] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.742 "name": "raid_bdev1", 00:15:02.742 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:02.742 "strip_size_kb": 64, 00:15:02.742 "state": "online", 00:15:02.742 "raid_level": "raid5f", 00:15:02.742 "superblock": true, 00:15:02.742 "num_base_bdevs": 4, 00:15:02.742 "num_base_bdevs_discovered": 4, 00:15:02.742 "num_base_bdevs_operational": 4, 00:15:02.742 "process": { 00:15:02.742 "type": "rebuild", 00:15:02.742 "target": "spare", 00:15:02.742 "progress": { 00:15:02.742 "blocks": 19200, 00:15:02.742 "percent": 10 00:15:02.742 } 00:15:02.742 }, 00:15:02.742 "base_bdevs_list": [ 00:15:02.742 { 00:15:02.742 "name": "spare", 00:15:02.742 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:02.742 "is_configured": true, 00:15:02.742 "data_offset": 2048, 00:15:02.742 "data_size": 63488 00:15:02.742 }, 00:15:02.742 { 00:15:02.742 "name": "BaseBdev2", 00:15:02.742 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:02.742 "is_configured": true, 00:15:02.742 
"data_offset": 2048, 00:15:02.742 "data_size": 63488 00:15:02.742 }, 00:15:02.742 { 00:15:02.742 "name": "BaseBdev3", 00:15:02.742 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:02.742 "is_configured": true, 00:15:02.742 "data_offset": 2048, 00:15:02.742 "data_size": 63488 00:15:02.742 }, 00:15:02.742 { 00:15:02.742 "name": "BaseBdev4", 00:15:02.742 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:02.742 "is_configured": true, 00:15:02.742 "data_offset": 2048, 00:15:02.742 "data_size": 63488 00:15:02.742 } 00:15:02.742 ] 00:15:02.742 }' 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.742 [2024-11-21 05:01:19.175530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.742 [2024-11-21 05:01:19.244667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:02.742 [2024-11-21 05:01:19.244753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.742 [2024-11-21 05:01:19.244786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.742 [2024-11-21 05:01:19.244795] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:02.742 
05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.742 "name": "raid_bdev1", 00:15:02.742 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:02.742 
"strip_size_kb": 64, 00:15:02.742 "state": "online", 00:15:02.742 "raid_level": "raid5f", 00:15:02.742 "superblock": true, 00:15:02.742 "num_base_bdevs": 4, 00:15:02.742 "num_base_bdevs_discovered": 3, 00:15:02.742 "num_base_bdevs_operational": 3, 00:15:02.742 "base_bdevs_list": [ 00:15:02.742 { 00:15:02.742 "name": null, 00:15:02.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.742 "is_configured": false, 00:15:02.742 "data_offset": 0, 00:15:02.742 "data_size": 63488 00:15:02.742 }, 00:15:02.742 { 00:15:02.742 "name": "BaseBdev2", 00:15:02.742 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:02.742 "is_configured": true, 00:15:02.742 "data_offset": 2048, 00:15:02.742 "data_size": 63488 00:15:02.742 }, 00:15:02.742 { 00:15:02.742 "name": "BaseBdev3", 00:15:02.742 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:02.742 "is_configured": true, 00:15:02.742 "data_offset": 2048, 00:15:02.742 "data_size": 63488 00:15:02.742 }, 00:15:02.742 { 00:15:02.742 "name": "BaseBdev4", 00:15:02.742 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:02.742 "is_configured": true, 00:15:02.742 "data_offset": 2048, 00:15:02.742 "data_size": 63488 00:15:02.742 } 00:15:02.742 ] 00:15:02.742 }' 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.742 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.002 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.002 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.002 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.002 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.002 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.002 
05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.002 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.002 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.002 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.002 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.002 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.002 "name": "raid_bdev1", 00:15:03.002 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:03.002 "strip_size_kb": 64, 00:15:03.002 "state": "online", 00:15:03.002 "raid_level": "raid5f", 00:15:03.002 "superblock": true, 00:15:03.002 "num_base_bdevs": 4, 00:15:03.002 "num_base_bdevs_discovered": 3, 00:15:03.002 "num_base_bdevs_operational": 3, 00:15:03.002 "base_bdevs_list": [ 00:15:03.002 { 00:15:03.002 "name": null, 00:15:03.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.002 "is_configured": false, 00:15:03.002 "data_offset": 0, 00:15:03.002 "data_size": 63488 00:15:03.002 }, 00:15:03.002 { 00:15:03.002 "name": "BaseBdev2", 00:15:03.002 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:03.002 "is_configured": true, 00:15:03.002 "data_offset": 2048, 00:15:03.002 "data_size": 63488 00:15:03.002 }, 00:15:03.002 { 00:15:03.002 "name": "BaseBdev3", 00:15:03.002 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:03.002 "is_configured": true, 00:15:03.002 "data_offset": 2048, 00:15:03.002 "data_size": 63488 00:15:03.002 }, 00:15:03.002 { 00:15:03.002 "name": "BaseBdev4", 00:15:03.002 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:03.002 "is_configured": true, 00:15:03.002 "data_offset": 2048, 00:15:03.002 "data_size": 63488 00:15:03.002 } 00:15:03.002 ] 00:15:03.002 }' 00:15:03.002 05:01:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.263 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.263 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.263 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.263 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.263 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.263 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.263 [2024-11-21 05:01:19.809520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.263 [2024-11-21 05:01:19.812818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:15:03.263 [2024-11-21 05:01:19.814918] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:03.263 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.263 05:01:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.201 "name": "raid_bdev1", 00:15:04.201 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:04.201 "strip_size_kb": 64, 00:15:04.201 "state": "online", 00:15:04.201 "raid_level": "raid5f", 00:15:04.201 "superblock": true, 00:15:04.201 "num_base_bdevs": 4, 00:15:04.201 "num_base_bdevs_discovered": 4, 00:15:04.201 "num_base_bdevs_operational": 4, 00:15:04.201 "process": { 00:15:04.201 "type": "rebuild", 00:15:04.201 "target": "spare", 00:15:04.201 "progress": { 00:15:04.201 "blocks": 19200, 00:15:04.201 "percent": 10 00:15:04.201 } 00:15:04.201 }, 00:15:04.201 "base_bdevs_list": [ 00:15:04.201 { 00:15:04.201 "name": "spare", 00:15:04.201 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:04.201 "is_configured": true, 00:15:04.201 "data_offset": 2048, 00:15:04.201 "data_size": 63488 00:15:04.201 }, 00:15:04.201 { 00:15:04.201 "name": "BaseBdev2", 00:15:04.201 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:04.201 "is_configured": true, 00:15:04.201 "data_offset": 2048, 00:15:04.201 "data_size": 63488 00:15:04.201 }, 00:15:04.201 { 00:15:04.201 "name": "BaseBdev3", 00:15:04.201 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:04.201 "is_configured": true, 00:15:04.201 "data_offset": 2048, 00:15:04.201 "data_size": 63488 00:15:04.201 }, 00:15:04.201 { 00:15:04.201 "name": "BaseBdev4", 00:15:04.201 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 
00:15:04.201 "is_configured": true, 00:15:04.201 "data_offset": 2048, 00:15:04.201 "data_size": 63488 00:15:04.201 } 00:15:04.201 ] 00:15:04.201 }' 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.201 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:04.461 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=532 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.461 05:01:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.461 05:01:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.461 "name": "raid_bdev1", 00:15:04.461 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:04.461 "strip_size_kb": 64, 00:15:04.461 "state": "online", 00:15:04.461 "raid_level": "raid5f", 00:15:04.461 "superblock": true, 00:15:04.461 "num_base_bdevs": 4, 00:15:04.461 "num_base_bdevs_discovered": 4, 00:15:04.461 "num_base_bdevs_operational": 4, 00:15:04.461 "process": { 00:15:04.461 "type": "rebuild", 00:15:04.461 "target": "spare", 00:15:04.461 "progress": { 00:15:04.461 "blocks": 21120, 00:15:04.461 "percent": 11 00:15:04.461 } 00:15:04.461 }, 00:15:04.461 "base_bdevs_list": [ 00:15:04.461 { 00:15:04.461 "name": "spare", 00:15:04.461 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:04.461 "is_configured": true, 00:15:04.461 "data_offset": 2048, 00:15:04.461 "data_size": 63488 00:15:04.461 }, 00:15:04.461 { 00:15:04.461 "name": "BaseBdev2", 00:15:04.461 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:04.461 "is_configured": true, 00:15:04.461 "data_offset": 2048, 00:15:04.461 "data_size": 63488 00:15:04.461 }, 00:15:04.461 { 00:15:04.461 "name": "BaseBdev3", 00:15:04.461 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:04.461 "is_configured": true, 00:15:04.461 "data_offset": 2048, 00:15:04.461 "data_size": 63488 00:15:04.461 }, 00:15:04.461 { 00:15:04.461 "name": "BaseBdev4", 00:15:04.461 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 
00:15:04.461 "is_configured": true, 00:15:04.461 "data_offset": 2048, 00:15:04.461 "data_size": 63488 00:15:04.461 } 00:15:04.461 ] 00:15:04.461 }' 00:15:04.461 05:01:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.461 05:01:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.461 05:01:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.461 05:01:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.461 05:01:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.400 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.400 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.400 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.400 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.400 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.400 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.400 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.400 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.400 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.659 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.659 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.659 05:01:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.659 "name": "raid_bdev1", 00:15:05.659 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:05.659 "strip_size_kb": 64, 00:15:05.659 "state": "online", 00:15:05.659 "raid_level": "raid5f", 00:15:05.659 "superblock": true, 00:15:05.659 "num_base_bdevs": 4, 00:15:05.659 "num_base_bdevs_discovered": 4, 00:15:05.659 "num_base_bdevs_operational": 4, 00:15:05.659 "process": { 00:15:05.659 "type": "rebuild", 00:15:05.659 "target": "spare", 00:15:05.659 "progress": { 00:15:05.659 "blocks": 44160, 00:15:05.659 "percent": 23 00:15:05.659 } 00:15:05.659 }, 00:15:05.659 "base_bdevs_list": [ 00:15:05.659 { 00:15:05.659 "name": "spare", 00:15:05.659 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:05.660 "is_configured": true, 00:15:05.660 "data_offset": 2048, 00:15:05.660 "data_size": 63488 00:15:05.660 }, 00:15:05.660 { 00:15:05.660 "name": "BaseBdev2", 00:15:05.660 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:05.660 "is_configured": true, 00:15:05.660 "data_offset": 2048, 00:15:05.660 "data_size": 63488 00:15:05.660 }, 00:15:05.660 { 00:15:05.660 "name": "BaseBdev3", 00:15:05.660 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:05.660 "is_configured": true, 00:15:05.660 "data_offset": 2048, 00:15:05.660 "data_size": 63488 00:15:05.660 }, 00:15:05.660 { 00:15:05.660 "name": "BaseBdev4", 00:15:05.660 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:05.660 "is_configured": true, 00:15:05.660 "data_offset": 2048, 00:15:05.660 "data_size": 63488 00:15:05.660 } 00:15:05.660 ] 00:15:05.660 }' 00:15:05.660 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.660 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.660 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.660 05:01:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.660 05:01:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.596 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.597 "name": "raid_bdev1", 00:15:06.597 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:06.597 "strip_size_kb": 64, 00:15:06.597 "state": "online", 00:15:06.597 "raid_level": "raid5f", 00:15:06.597 "superblock": true, 00:15:06.597 "num_base_bdevs": 4, 00:15:06.597 "num_base_bdevs_discovered": 4, 00:15:06.597 "num_base_bdevs_operational": 4, 00:15:06.597 "process": { 00:15:06.597 "type": "rebuild", 00:15:06.597 "target": "spare", 00:15:06.597 "progress": 
{ 00:15:06.597 "blocks": 65280, 00:15:06.597 "percent": 34 00:15:06.597 } 00:15:06.597 }, 00:15:06.597 "base_bdevs_list": [ 00:15:06.597 { 00:15:06.597 "name": "spare", 00:15:06.597 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:06.597 "is_configured": true, 00:15:06.597 "data_offset": 2048, 00:15:06.597 "data_size": 63488 00:15:06.597 }, 00:15:06.597 { 00:15:06.597 "name": "BaseBdev2", 00:15:06.597 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:06.597 "is_configured": true, 00:15:06.597 "data_offset": 2048, 00:15:06.597 "data_size": 63488 00:15:06.597 }, 00:15:06.597 { 00:15:06.597 "name": "BaseBdev3", 00:15:06.597 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:06.597 "is_configured": true, 00:15:06.597 "data_offset": 2048, 00:15:06.597 "data_size": 63488 00:15:06.597 }, 00:15:06.597 { 00:15:06.597 "name": "BaseBdev4", 00:15:06.597 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:06.597 "is_configured": true, 00:15:06.597 "data_offset": 2048, 00:15:06.597 "data_size": 63488 00:15:06.597 } 00:15:06.597 ] 00:15:06.597 }' 00:15:06.597 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.856 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.856 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.856 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.856 05:01:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.793 
05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.793 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.793 "name": "raid_bdev1", 00:15:07.793 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:07.793 "strip_size_kb": 64, 00:15:07.793 "state": "online", 00:15:07.793 "raid_level": "raid5f", 00:15:07.793 "superblock": true, 00:15:07.793 "num_base_bdevs": 4, 00:15:07.793 "num_base_bdevs_discovered": 4, 00:15:07.793 "num_base_bdevs_operational": 4, 00:15:07.793 "process": { 00:15:07.793 "type": "rebuild", 00:15:07.793 "target": "spare", 00:15:07.793 "progress": { 00:15:07.793 "blocks": 86400, 00:15:07.793 "percent": 45 00:15:07.793 } 00:15:07.793 }, 00:15:07.793 "base_bdevs_list": [ 00:15:07.793 { 00:15:07.793 "name": "spare", 00:15:07.793 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:07.793 "is_configured": true, 00:15:07.793 "data_offset": 2048, 00:15:07.793 "data_size": 63488 00:15:07.793 }, 00:15:07.793 { 00:15:07.793 "name": "BaseBdev2", 00:15:07.793 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:07.793 "is_configured": true, 00:15:07.793 "data_offset": 2048, 00:15:07.794 "data_size": 
63488 00:15:07.794 }, 00:15:07.794 { 00:15:07.794 "name": "BaseBdev3", 00:15:07.794 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:07.794 "is_configured": true, 00:15:07.794 "data_offset": 2048, 00:15:07.794 "data_size": 63488 00:15:07.794 }, 00:15:07.794 { 00:15:07.794 "name": "BaseBdev4", 00:15:07.794 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:07.794 "is_configured": true, 00:15:07.794 "data_offset": 2048, 00:15:07.794 "data_size": 63488 00:15:07.794 } 00:15:07.794 ] 00:15:07.794 }' 00:15:07.794 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.794 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.794 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.053 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.053 05:01:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.991 05:01:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.991 "name": "raid_bdev1", 00:15:08.991 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:08.991 "strip_size_kb": 64, 00:15:08.991 "state": "online", 00:15:08.991 "raid_level": "raid5f", 00:15:08.991 "superblock": true, 00:15:08.991 "num_base_bdevs": 4, 00:15:08.991 "num_base_bdevs_discovered": 4, 00:15:08.991 "num_base_bdevs_operational": 4, 00:15:08.991 "process": { 00:15:08.991 "type": "rebuild", 00:15:08.991 "target": "spare", 00:15:08.991 "progress": { 00:15:08.991 "blocks": 109440, 00:15:08.991 "percent": 57 00:15:08.991 } 00:15:08.991 }, 00:15:08.991 "base_bdevs_list": [ 00:15:08.991 { 00:15:08.991 "name": "spare", 00:15:08.991 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:08.991 "is_configured": true, 00:15:08.991 "data_offset": 2048, 00:15:08.991 "data_size": 63488 00:15:08.991 }, 00:15:08.991 { 00:15:08.991 "name": "BaseBdev2", 00:15:08.991 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:08.991 "is_configured": true, 00:15:08.991 "data_offset": 2048, 00:15:08.991 "data_size": 63488 00:15:08.991 }, 00:15:08.991 { 00:15:08.991 "name": "BaseBdev3", 00:15:08.991 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:08.991 "is_configured": true, 00:15:08.991 "data_offset": 2048, 00:15:08.991 "data_size": 63488 00:15:08.991 }, 00:15:08.991 { 00:15:08.991 "name": "BaseBdev4", 00:15:08.991 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:08.991 "is_configured": true, 00:15:08.991 "data_offset": 2048, 00:15:08.991 "data_size": 63488 00:15:08.991 } 00:15:08.991 ] 00:15:08.991 }' 00:15:08.991 05:01:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.991 05:01:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.370 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.370 "name": "raid_bdev1", 00:15:10.370 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:10.370 
"strip_size_kb": 64, 00:15:10.370 "state": "online", 00:15:10.370 "raid_level": "raid5f", 00:15:10.370 "superblock": true, 00:15:10.370 "num_base_bdevs": 4, 00:15:10.370 "num_base_bdevs_discovered": 4, 00:15:10.370 "num_base_bdevs_operational": 4, 00:15:10.370 "process": { 00:15:10.370 "type": "rebuild", 00:15:10.370 "target": "spare", 00:15:10.371 "progress": { 00:15:10.371 "blocks": 130560, 00:15:10.371 "percent": 68 00:15:10.371 } 00:15:10.371 }, 00:15:10.371 "base_bdevs_list": [ 00:15:10.371 { 00:15:10.371 "name": "spare", 00:15:10.371 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:10.371 "is_configured": true, 00:15:10.371 "data_offset": 2048, 00:15:10.371 "data_size": 63488 00:15:10.371 }, 00:15:10.371 { 00:15:10.371 "name": "BaseBdev2", 00:15:10.371 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:10.371 "is_configured": true, 00:15:10.371 "data_offset": 2048, 00:15:10.371 "data_size": 63488 00:15:10.371 }, 00:15:10.371 { 00:15:10.371 "name": "BaseBdev3", 00:15:10.371 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:10.371 "is_configured": true, 00:15:10.371 "data_offset": 2048, 00:15:10.371 "data_size": 63488 00:15:10.371 }, 00:15:10.371 { 00:15:10.371 "name": "BaseBdev4", 00:15:10.371 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:10.371 "is_configured": true, 00:15:10.371 "data_offset": 2048, 00:15:10.371 "data_size": 63488 00:15:10.371 } 00:15:10.371 ] 00:15:10.371 }' 00:15:10.371 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.371 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.371 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.371 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.371 05:01:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.309 
05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.309 "name": "raid_bdev1", 00:15:11.309 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:11.309 "strip_size_kb": 64, 00:15:11.309 "state": "online", 00:15:11.309 "raid_level": "raid5f", 00:15:11.309 "superblock": true, 00:15:11.309 "num_base_bdevs": 4, 00:15:11.309 "num_base_bdevs_discovered": 4, 00:15:11.309 "num_base_bdevs_operational": 4, 00:15:11.309 "process": { 00:15:11.309 "type": "rebuild", 00:15:11.309 "target": "spare", 00:15:11.309 "progress": { 00:15:11.309 "blocks": 151680, 00:15:11.309 "percent": 79 00:15:11.309 } 00:15:11.309 }, 00:15:11.309 "base_bdevs_list": [ 00:15:11.309 { 00:15:11.309 "name": "spare", 00:15:11.309 "uuid": 
"65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:11.309 "is_configured": true, 00:15:11.309 "data_offset": 2048, 00:15:11.309 "data_size": 63488 00:15:11.309 }, 00:15:11.309 { 00:15:11.309 "name": "BaseBdev2", 00:15:11.309 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:11.309 "is_configured": true, 00:15:11.309 "data_offset": 2048, 00:15:11.309 "data_size": 63488 00:15:11.309 }, 00:15:11.309 { 00:15:11.309 "name": "BaseBdev3", 00:15:11.309 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:11.309 "is_configured": true, 00:15:11.309 "data_offset": 2048, 00:15:11.309 "data_size": 63488 00:15:11.309 }, 00:15:11.309 { 00:15:11.309 "name": "BaseBdev4", 00:15:11.309 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:11.309 "is_configured": true, 00:15:11.309 "data_offset": 2048, 00:15:11.309 "data_size": 63488 00:15:11.309 } 00:15:11.309 ] 00:15:11.309 }' 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.309 05:01:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.247 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.247 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.247 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.247 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.247 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:12.247 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.247 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.247 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.247 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.248 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.507 05:01:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.507 05:01:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.507 "name": "raid_bdev1", 00:15:12.507 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:12.507 "strip_size_kb": 64, 00:15:12.507 "state": "online", 00:15:12.507 "raid_level": "raid5f", 00:15:12.507 "superblock": true, 00:15:12.507 "num_base_bdevs": 4, 00:15:12.507 "num_base_bdevs_discovered": 4, 00:15:12.507 "num_base_bdevs_operational": 4, 00:15:12.507 "process": { 00:15:12.507 "type": "rebuild", 00:15:12.507 "target": "spare", 00:15:12.507 "progress": { 00:15:12.507 "blocks": 174720, 00:15:12.507 "percent": 91 00:15:12.507 } 00:15:12.507 }, 00:15:12.507 "base_bdevs_list": [ 00:15:12.507 { 00:15:12.507 "name": "spare", 00:15:12.507 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:12.507 "is_configured": true, 00:15:12.507 "data_offset": 2048, 00:15:12.507 "data_size": 63488 00:15:12.507 }, 00:15:12.507 { 00:15:12.507 "name": "BaseBdev2", 00:15:12.507 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:12.507 "is_configured": true, 00:15:12.507 "data_offset": 2048, 00:15:12.507 "data_size": 63488 00:15:12.507 }, 00:15:12.507 { 00:15:12.507 "name": "BaseBdev3", 00:15:12.507 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:12.507 "is_configured": true, 00:15:12.507 
"data_offset": 2048, 00:15:12.507 "data_size": 63488 00:15:12.507 }, 00:15:12.507 { 00:15:12.507 "name": "BaseBdev4", 00:15:12.507 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:12.507 "is_configured": true, 00:15:12.507 "data_offset": 2048, 00:15:12.507 "data_size": 63488 00:15:12.507 } 00:15:12.507 ] 00:15:12.507 }' 00:15:12.507 05:01:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.507 05:01:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.507 05:01:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.507 05:01:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.507 05:01:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.445 [2024-11-21 05:01:29.856016] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:13.445 [2024-11-21 05:01:29.856098] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:13.445 [2024-11-21 05:01:29.856224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.445 05:01:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.445 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.445 "name": "raid_bdev1", 00:15:13.445 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:13.445 "strip_size_kb": 64, 00:15:13.445 "state": "online", 00:15:13.445 "raid_level": "raid5f", 00:15:13.445 "superblock": true, 00:15:13.445 "num_base_bdevs": 4, 00:15:13.445 "num_base_bdevs_discovered": 4, 00:15:13.445 "num_base_bdevs_operational": 4, 00:15:13.445 "base_bdevs_list": [ 00:15:13.445 { 00:15:13.445 "name": "spare", 00:15:13.445 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:13.445 "is_configured": true, 00:15:13.445 "data_offset": 2048, 00:15:13.445 "data_size": 63488 00:15:13.445 }, 00:15:13.445 { 00:15:13.445 "name": "BaseBdev2", 00:15:13.446 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:13.446 "is_configured": true, 00:15:13.446 "data_offset": 2048, 00:15:13.446 "data_size": 63488 00:15:13.446 }, 00:15:13.446 { 00:15:13.446 "name": "BaseBdev3", 00:15:13.446 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:13.446 "is_configured": true, 00:15:13.446 "data_offset": 2048, 00:15:13.446 "data_size": 63488 00:15:13.446 }, 00:15:13.446 { 00:15:13.446 "name": "BaseBdev4", 00:15:13.446 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:13.446 "is_configured": true, 00:15:13.446 "data_offset": 2048, 00:15:13.446 "data_size": 63488 00:15:13.446 } 00:15:13.446 ] 00:15:13.446 }' 00:15:13.446 05:01:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.706 "name": "raid_bdev1", 00:15:13.706 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:13.706 "strip_size_kb": 64, 00:15:13.706 "state": "online", 00:15:13.706 "raid_level": "raid5f", 00:15:13.706 "superblock": true, 
00:15:13.706 "num_base_bdevs": 4, 00:15:13.706 "num_base_bdevs_discovered": 4, 00:15:13.706 "num_base_bdevs_operational": 4, 00:15:13.706 "base_bdevs_list": [ 00:15:13.706 { 00:15:13.706 "name": "spare", 00:15:13.706 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:13.706 "is_configured": true, 00:15:13.706 "data_offset": 2048, 00:15:13.706 "data_size": 63488 00:15:13.706 }, 00:15:13.706 { 00:15:13.706 "name": "BaseBdev2", 00:15:13.706 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:13.706 "is_configured": true, 00:15:13.706 "data_offset": 2048, 00:15:13.706 "data_size": 63488 00:15:13.706 }, 00:15:13.706 { 00:15:13.706 "name": "BaseBdev3", 00:15:13.706 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:13.706 "is_configured": true, 00:15:13.706 "data_offset": 2048, 00:15:13.706 "data_size": 63488 00:15:13.706 }, 00:15:13.706 { 00:15:13.706 "name": "BaseBdev4", 00:15:13.706 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:13.706 "is_configured": true, 00:15:13.706 "data_offset": 2048, 00:15:13.706 "data_size": 63488 00:15:13.706 } 00:15:13.706 ] 00:15:13.706 }' 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid5f 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.706 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.966 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.966 "name": "raid_bdev1", 00:15:13.966 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:13.966 "strip_size_kb": 64, 00:15:13.966 "state": "online", 00:15:13.966 "raid_level": "raid5f", 00:15:13.966 "superblock": true, 00:15:13.966 "num_base_bdevs": 4, 00:15:13.966 "num_base_bdevs_discovered": 4, 00:15:13.966 "num_base_bdevs_operational": 4, 00:15:13.966 "base_bdevs_list": [ 00:15:13.966 { 00:15:13.966 "name": "spare", 00:15:13.966 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:13.966 "is_configured": true, 00:15:13.966 "data_offset": 2048, 00:15:13.966 "data_size": 63488 00:15:13.966 }, 00:15:13.966 { 00:15:13.966 "name": 
"BaseBdev2", 00:15:13.966 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:13.966 "is_configured": true, 00:15:13.966 "data_offset": 2048, 00:15:13.966 "data_size": 63488 00:15:13.966 }, 00:15:13.966 { 00:15:13.966 "name": "BaseBdev3", 00:15:13.966 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:13.966 "is_configured": true, 00:15:13.966 "data_offset": 2048, 00:15:13.966 "data_size": 63488 00:15:13.966 }, 00:15:13.966 { 00:15:13.966 "name": "BaseBdev4", 00:15:13.966 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:13.966 "is_configured": true, 00:15:13.966 "data_offset": 2048, 00:15:13.966 "data_size": 63488 00:15:13.966 } 00:15:13.966 ] 00:15:13.966 }' 00:15:13.966 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.966 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.225 [2024-11-21 05:01:30.771928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.225 [2024-11-21 05:01:30.771957] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.225 [2024-11-21 05:01:30.772043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.225 [2024-11-21 05:01:30.772143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.225 [2024-11-21 05:01:30.772159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.225 
05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.225 05:01:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev1 /dev/nbd0 00:15:14.484 /dev/nbd0 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.484 1+0 records in 00:15:14.484 1+0 records out 00:15:14.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570199 s, 7.2 MB/s 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@893 -- # return 0 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.484 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:14.744 /dev/nbd1 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.744 1+0 records in 00:15:14.744 1+0 records out 00:15:14.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246663 s, 16.6 MB/s 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.744 
05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:14.744 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:14.745 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.745 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:15.004 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:15.004 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:15.004 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:15.004 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.004 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.004 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:15.004 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:15.004 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.004 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.004 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.264 05:01:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.264 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.264 [2024-11-21 05:01:31.811316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:15.264 [2024-11-21 05:01:31.811375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.265 [2024-11-21 05:01:31.811410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:15.265 [2024-11-21 05:01:31.811420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.265 [2024-11-21 05:01:31.813665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.265 [2024-11-21 05:01:31.813707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:15.265 [2024-11-21 05:01:31.813784] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:15.265 [2024-11-21 05:01:31.813825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.265 [2024-11-21 05:01:31.813926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.265 [2024-11-21 05:01:31.814004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.265 [2024-11-21 05:01:31.814067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:15.265 spare 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.265 [2024-11-21 05:01:31.913991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:15.265 [2024-11-21 05:01:31.914031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:15.265 [2024-11-21 05:01:31.914328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:15:15.265 [2024-11-21 05:01:31.914802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:15.265 [2024-11-21 05:01:31.914827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:15.265 [2024-11-21 05:01:31.914964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.265 "name": "raid_bdev1", 00:15:15.265 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:15.265 "strip_size_kb": 64, 00:15:15.265 "state": "online", 00:15:15.265 "raid_level": "raid5f", 00:15:15.265 "superblock": true, 00:15:15.265 "num_base_bdevs": 4, 00:15:15.265 "num_base_bdevs_discovered": 4, 00:15:15.265 "num_base_bdevs_operational": 4, 00:15:15.265 "base_bdevs_list": [ 00:15:15.265 { 00:15:15.265 "name": "spare", 00:15:15.265 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:15.265 "is_configured": true, 00:15:15.265 "data_offset": 2048, 00:15:15.265 "data_size": 63488 00:15:15.265 }, 00:15:15.265 { 00:15:15.265 "name": "BaseBdev2", 00:15:15.265 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:15.265 "is_configured": true, 00:15:15.265 "data_offset": 2048, 00:15:15.265 "data_size": 63488 00:15:15.265 }, 00:15:15.265 { 00:15:15.265 "name": "BaseBdev3", 00:15:15.265 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:15.265 "is_configured": true, 00:15:15.265 "data_offset": 2048, 00:15:15.265 "data_size": 63488 00:15:15.265 }, 
00:15:15.265 { 00:15:15.265 "name": "BaseBdev4", 00:15:15.265 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:15.265 "is_configured": true, 00:15:15.265 "data_offset": 2048, 00:15:15.265 "data_size": 63488 00:15:15.265 } 00:15:15.265 ] 00:15:15.265 }' 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.265 05:01:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.834 "name": "raid_bdev1", 00:15:15.834 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:15.834 "strip_size_kb": 64, 00:15:15.834 "state": "online", 00:15:15.834 "raid_level": "raid5f", 00:15:15.834 "superblock": true, 00:15:15.834 "num_base_bdevs": 4, 00:15:15.834 "num_base_bdevs_discovered": 4, 
00:15:15.834 "num_base_bdevs_operational": 4, 00:15:15.834 "base_bdevs_list": [ 00:15:15.834 { 00:15:15.834 "name": "spare", 00:15:15.834 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:15.834 "is_configured": true, 00:15:15.834 "data_offset": 2048, 00:15:15.834 "data_size": 63488 00:15:15.834 }, 00:15:15.834 { 00:15:15.834 "name": "BaseBdev2", 00:15:15.834 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:15.834 "is_configured": true, 00:15:15.834 "data_offset": 2048, 00:15:15.834 "data_size": 63488 00:15:15.834 }, 00:15:15.834 { 00:15:15.834 "name": "BaseBdev3", 00:15:15.834 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:15.834 "is_configured": true, 00:15:15.834 "data_offset": 2048, 00:15:15.834 "data_size": 63488 00:15:15.834 }, 00:15:15.834 { 00:15:15.834 "name": "BaseBdev4", 00:15:15.834 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:15.834 "is_configured": true, 00:15:15.834 "data_offset": 2048, 00:15:15.834 "data_size": 63488 00:15:15.834 } 00:15:15.834 ] 00:15:15.834 }' 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.834 [2024-11-21 05:01:32.527610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.834 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.835 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.094 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.094 "name": "raid_bdev1", 00:15:16.094 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:16.094 "strip_size_kb": 64, 00:15:16.094 "state": "online", 00:15:16.094 "raid_level": "raid5f", 00:15:16.094 "superblock": true, 00:15:16.094 "num_base_bdevs": 4, 00:15:16.094 "num_base_bdevs_discovered": 3, 00:15:16.094 "num_base_bdevs_operational": 3, 00:15:16.094 "base_bdevs_list": [ 00:15:16.094 { 00:15:16.094 "name": null, 00:15:16.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.094 "is_configured": false, 00:15:16.094 "data_offset": 0, 00:15:16.094 "data_size": 63488 00:15:16.094 }, 00:15:16.094 { 00:15:16.094 "name": "BaseBdev2", 00:15:16.094 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:16.094 "is_configured": true, 00:15:16.094 "data_offset": 2048, 00:15:16.094 "data_size": 63488 00:15:16.094 }, 00:15:16.094 { 00:15:16.094 "name": "BaseBdev3", 00:15:16.094 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:16.094 "is_configured": true, 00:15:16.094 "data_offset": 2048, 00:15:16.094 "data_size": 63488 00:15:16.094 }, 00:15:16.094 { 00:15:16.094 "name": "BaseBdev4", 00:15:16.094 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:16.094 "is_configured": true, 00:15:16.094 "data_offset": 2048, 00:15:16.094 "data_size": 63488 00:15:16.094 } 00:15:16.094 ] 00:15:16.094 }' 00:15:16.094 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.094 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.354 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:16.354 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.354 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.354 [2024-11-21 05:01:32.954967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.354 [2024-11-21 05:01:32.955194] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:16.354 [2024-11-21 05:01:32.955282] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:16.354 [2024-11-21 05:01:32.955365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.354 [2024-11-21 05:01:32.959363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:15:16.354 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.354 05:01:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:16.354 [2024-11-21 05:01:32.961647] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.292 05:01:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.292 05:01:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.292 05:01:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.292 05:01:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.292 05:01:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.292 
05:01:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.292 05:01:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.292 05:01:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.292 05:01:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.292 05:01:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.292 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.292 "name": "raid_bdev1", 00:15:17.292 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:17.292 "strip_size_kb": 64, 00:15:17.292 "state": "online", 00:15:17.292 "raid_level": "raid5f", 00:15:17.292 "superblock": true, 00:15:17.292 "num_base_bdevs": 4, 00:15:17.292 "num_base_bdevs_discovered": 4, 00:15:17.292 "num_base_bdevs_operational": 4, 00:15:17.292 "process": { 00:15:17.292 "type": "rebuild", 00:15:17.292 "target": "spare", 00:15:17.292 "progress": { 00:15:17.292 "blocks": 19200, 00:15:17.292 "percent": 10 00:15:17.292 } 00:15:17.292 }, 00:15:17.292 "base_bdevs_list": [ 00:15:17.292 { 00:15:17.292 "name": "spare", 00:15:17.292 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:17.292 "is_configured": true, 00:15:17.292 "data_offset": 2048, 00:15:17.292 "data_size": 63488 00:15:17.292 }, 00:15:17.292 { 00:15:17.292 "name": "BaseBdev2", 00:15:17.292 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:17.292 "is_configured": true, 00:15:17.292 "data_offset": 2048, 00:15:17.292 "data_size": 63488 00:15:17.292 }, 00:15:17.292 { 00:15:17.292 "name": "BaseBdev3", 00:15:17.292 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:17.292 "is_configured": true, 00:15:17.292 "data_offset": 2048, 00:15:17.292 "data_size": 63488 00:15:17.292 }, 00:15:17.292 { 00:15:17.292 "name": "BaseBdev4", 00:15:17.292 "uuid": 
"e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:17.292 "is_configured": true, 00:15:17.292 "data_offset": 2048, 00:15:17.292 "data_size": 63488 00:15:17.292 } 00:15:17.292 ] 00:15:17.292 }' 00:15:17.292 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.551 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.551 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.551 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.551 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:17.551 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.552 [2024-11-21 05:01:34.126160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.552 [2024-11-21 05:01:34.166660] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.552 [2024-11-21 05:01:34.166710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.552 [2024-11-21 05:01:34.166727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.552 [2024-11-21 05:01:34.166734] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.552 05:01:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.552 "name": "raid_bdev1", 00:15:17.552 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:17.552 "strip_size_kb": 64, 00:15:17.552 "state": "online", 00:15:17.552 "raid_level": "raid5f", 00:15:17.552 "superblock": true, 00:15:17.552 "num_base_bdevs": 4, 00:15:17.552 "num_base_bdevs_discovered": 3, 00:15:17.552 "num_base_bdevs_operational": 3, 00:15:17.552 "base_bdevs_list": [ 00:15:17.552 { 00:15:17.552 "name": null, 00:15:17.552 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:17.552 "is_configured": false, 00:15:17.552 "data_offset": 0, 00:15:17.552 "data_size": 63488 00:15:17.552 }, 00:15:17.552 { 00:15:17.552 "name": "BaseBdev2", 00:15:17.552 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:17.552 "is_configured": true, 00:15:17.552 "data_offset": 2048, 00:15:17.552 "data_size": 63488 00:15:17.552 }, 00:15:17.552 { 00:15:17.552 "name": "BaseBdev3", 00:15:17.552 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:17.552 "is_configured": true, 00:15:17.552 "data_offset": 2048, 00:15:17.552 "data_size": 63488 00:15:17.552 }, 00:15:17.552 { 00:15:17.552 "name": "BaseBdev4", 00:15:17.552 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:17.552 "is_configured": true, 00:15:17.552 "data_offset": 2048, 00:15:17.552 "data_size": 63488 00:15:17.552 } 00:15:17.552 ] 00:15:17.552 }' 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.552 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.121 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:18.121 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.121 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.121 [2024-11-21 05:01:34.651218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:18.121 [2024-11-21 05:01:34.651339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.121 [2024-11-21 05:01:34.651424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:18.121 [2024-11-21 05:01:34.651473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.121 [2024-11-21 05:01:34.651937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:15:18.121 [2024-11-21 05:01:34.652001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:18.121 [2024-11-21 05:01:34.652154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:18.121 [2024-11-21 05:01:34.652202] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:18.121 [2024-11-21 05:01:34.652291] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:18.121 [2024-11-21 05:01:34.652352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.121 [2024-11-21 05:01:34.656004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:15:18.121 spare 00:15:18.121 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.121 05:01:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:18.121 [2024-11-21 05:01:34.658204] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.059 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.059 "name": "raid_bdev1", 00:15:19.059 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:19.059 "strip_size_kb": 64, 00:15:19.059 "state": "online", 00:15:19.059 "raid_level": "raid5f", 00:15:19.059 "superblock": true, 00:15:19.059 "num_base_bdevs": 4, 00:15:19.059 "num_base_bdevs_discovered": 4, 00:15:19.059 "num_base_bdevs_operational": 4, 00:15:19.059 "process": { 00:15:19.059 "type": "rebuild", 00:15:19.059 "target": "spare", 00:15:19.059 "progress": { 00:15:19.059 "blocks": 19200, 00:15:19.059 "percent": 10 00:15:19.059 } 00:15:19.059 }, 00:15:19.059 "base_bdevs_list": [ 00:15:19.060 { 00:15:19.060 "name": "spare", 00:15:19.060 "uuid": "65763b3c-f641-5555-9da1-cdb2ed1585a1", 00:15:19.060 "is_configured": true, 00:15:19.060 "data_offset": 2048, 00:15:19.060 "data_size": 63488 00:15:19.060 }, 00:15:19.060 { 00:15:19.060 "name": "BaseBdev2", 00:15:19.060 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:19.060 "is_configured": true, 00:15:19.060 "data_offset": 2048, 00:15:19.060 "data_size": 63488 00:15:19.060 }, 00:15:19.060 { 00:15:19.060 "name": "BaseBdev3", 00:15:19.060 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:19.060 "is_configured": true, 00:15:19.060 "data_offset": 2048, 00:15:19.060 "data_size": 63488 00:15:19.060 }, 00:15:19.060 { 00:15:19.060 "name": "BaseBdev4", 00:15:19.060 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:19.060 "is_configured": true, 00:15:19.060 "data_offset": 2048, 00:15:19.060 "data_size": 63488 00:15:19.060 } 00:15:19.060 ] 00:15:19.060 }' 00:15:19.060 05:01:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.060 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.060 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.320 [2024-11-21 05:01:35.798201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.320 [2024-11-21 05:01:35.863221] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.320 [2024-11-21 05:01:35.863272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.320 [2024-11-21 05:01:35.863304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.320 [2024-11-21 05:01:35.863313] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.320 
05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.320 "name": "raid_bdev1", 00:15:19.320 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:19.320 "strip_size_kb": 64, 00:15:19.320 "state": "online", 00:15:19.320 "raid_level": "raid5f", 00:15:19.320 "superblock": true, 00:15:19.320 "num_base_bdevs": 4, 00:15:19.320 "num_base_bdevs_discovered": 3, 00:15:19.320 "num_base_bdevs_operational": 3, 00:15:19.320 "base_bdevs_list": [ 00:15:19.320 { 00:15:19.320 "name": null, 00:15:19.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.320 "is_configured": false, 00:15:19.320 "data_offset": 0, 00:15:19.320 "data_size": 63488 00:15:19.320 }, 00:15:19.320 { 00:15:19.320 "name": "BaseBdev2", 00:15:19.320 "uuid": 
"c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:19.320 "is_configured": true, 00:15:19.320 "data_offset": 2048, 00:15:19.320 "data_size": 63488 00:15:19.320 }, 00:15:19.320 { 00:15:19.320 "name": "BaseBdev3", 00:15:19.320 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:19.320 "is_configured": true, 00:15:19.320 "data_offset": 2048, 00:15:19.320 "data_size": 63488 00:15:19.320 }, 00:15:19.320 { 00:15:19.320 "name": "BaseBdev4", 00:15:19.320 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:19.320 "is_configured": true, 00:15:19.320 "data_offset": 2048, 00:15:19.320 "data_size": 63488 00:15:19.320 } 00:15:19.320 ] 00:15:19.320 }' 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.320 05:01:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.888 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.889 05:01:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.889 "name": "raid_bdev1", 00:15:19.889 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:19.889 "strip_size_kb": 64, 00:15:19.889 "state": "online", 00:15:19.889 "raid_level": "raid5f", 00:15:19.889 "superblock": true, 00:15:19.889 "num_base_bdevs": 4, 00:15:19.889 "num_base_bdevs_discovered": 3, 00:15:19.889 "num_base_bdevs_operational": 3, 00:15:19.889 "base_bdevs_list": [ 00:15:19.889 { 00:15:19.889 "name": null, 00:15:19.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.889 "is_configured": false, 00:15:19.889 "data_offset": 0, 00:15:19.889 "data_size": 63488 00:15:19.889 }, 00:15:19.889 { 00:15:19.889 "name": "BaseBdev2", 00:15:19.889 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:19.889 "is_configured": true, 00:15:19.889 "data_offset": 2048, 00:15:19.889 "data_size": 63488 00:15:19.889 }, 00:15:19.889 { 00:15:19.889 "name": "BaseBdev3", 00:15:19.889 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:19.889 "is_configured": true, 00:15:19.889 "data_offset": 2048, 00:15:19.889 "data_size": 63488 00:15:19.889 }, 00:15:19.889 { 00:15:19.889 "name": "BaseBdev4", 00:15:19.889 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:19.889 "is_configured": true, 00:15:19.889 "data_offset": 2048, 00:15:19.889 "data_size": 63488 00:15:19.889 } 00:15:19.889 ] 00:15:19.889 }' 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:19.889 
05:01:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.889 [2024-11-21 05:01:36.499486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:19.889 [2024-11-21 05:01:36.499547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.889 [2024-11-21 05:01:36.499582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:19.889 [2024-11-21 05:01:36.499595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.889 [2024-11-21 05:01:36.500002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.889 [2024-11-21 05:01:36.500026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.889 [2024-11-21 05:01:36.500108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:19.889 [2024-11-21 05:01:36.500128] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:19.889 [2024-11-21 05:01:36.500136] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:19.889 [2024-11-21 05:01:36.500147] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 
00:15:19.889 BaseBdev1 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.889 05:01:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.828 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.088 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:21.088 "name": "raid_bdev1", 00:15:21.088 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:21.088 "strip_size_kb": 64, 00:15:21.088 "state": "online", 00:15:21.088 "raid_level": "raid5f", 00:15:21.088 "superblock": true, 00:15:21.088 "num_base_bdevs": 4, 00:15:21.088 "num_base_bdevs_discovered": 3, 00:15:21.088 "num_base_bdevs_operational": 3, 00:15:21.088 "base_bdevs_list": [ 00:15:21.088 { 00:15:21.088 "name": null, 00:15:21.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.088 "is_configured": false, 00:15:21.088 "data_offset": 0, 00:15:21.088 "data_size": 63488 00:15:21.088 }, 00:15:21.088 { 00:15:21.088 "name": "BaseBdev2", 00:15:21.088 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:21.088 "is_configured": true, 00:15:21.088 "data_offset": 2048, 00:15:21.088 "data_size": 63488 00:15:21.088 }, 00:15:21.088 { 00:15:21.088 "name": "BaseBdev3", 00:15:21.088 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:21.088 "is_configured": true, 00:15:21.088 "data_offset": 2048, 00:15:21.088 "data_size": 63488 00:15:21.088 }, 00:15:21.088 { 00:15:21.088 "name": "BaseBdev4", 00:15:21.088 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:21.088 "is_configured": true, 00:15:21.088 "data_offset": 2048, 00:15:21.088 "data_size": 63488 00:15:21.088 } 00:15:21.088 ] 00:15:21.088 }' 00:15:21.088 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.088 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.348 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.348 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.348 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.348 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:15:21.348 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.348 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.348 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.348 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.348 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.348 05:01:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.348 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.348 "name": "raid_bdev1", 00:15:21.348 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:21.348 "strip_size_kb": 64, 00:15:21.348 "state": "online", 00:15:21.348 "raid_level": "raid5f", 00:15:21.348 "superblock": true, 00:15:21.348 "num_base_bdevs": 4, 00:15:21.348 "num_base_bdevs_discovered": 3, 00:15:21.348 "num_base_bdevs_operational": 3, 00:15:21.348 "base_bdevs_list": [ 00:15:21.348 { 00:15:21.348 "name": null, 00:15:21.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.348 "is_configured": false, 00:15:21.348 "data_offset": 0, 00:15:21.348 "data_size": 63488 00:15:21.348 }, 00:15:21.348 { 00:15:21.348 "name": "BaseBdev2", 00:15:21.348 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:21.348 "is_configured": true, 00:15:21.348 "data_offset": 2048, 00:15:21.348 "data_size": 63488 00:15:21.348 }, 00:15:21.348 { 00:15:21.348 "name": "BaseBdev3", 00:15:21.348 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:21.348 "is_configured": true, 00:15:21.348 "data_offset": 2048, 00:15:21.348 "data_size": 63488 00:15:21.348 }, 00:15:21.348 { 00:15:21.348 "name": "BaseBdev4", 00:15:21.348 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:21.348 "is_configured": true, 
00:15:21.348 "data_offset": 2048, 00:15:21.348 "data_size": 63488 00:15:21.348 } 00:15:21.348 ] 00:15:21.348 }' 00:15:21.348 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.348 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.348 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.348 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.348 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.348 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:21.348 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.608 [2024-11-21 05:01:38.092747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.608 [2024-11-21 05:01:38.092905] bdev_raid.c:3700:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:21.608 [2024-11-21 05:01:38.092916] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:21.608 request: 00:15:21.608 { 00:15:21.608 "base_bdev": "BaseBdev1", 00:15:21.608 "raid_bdev": "raid_bdev1", 00:15:21.608 "method": "bdev_raid_add_base_bdev", 00:15:21.608 "req_id": 1 00:15:21.608 } 00:15:21.608 Got JSON-RPC error response 00:15:21.608 response: 00:15:21.608 { 00:15:21.608 "code": -22, 00:15:21.608 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:21.608 } 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.608 05:01:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.546 05:01:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.546 "name": "raid_bdev1", 00:15:22.546 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:22.546 "strip_size_kb": 64, 00:15:22.546 "state": "online", 00:15:22.546 "raid_level": "raid5f", 00:15:22.546 "superblock": true, 00:15:22.546 "num_base_bdevs": 4, 00:15:22.546 "num_base_bdevs_discovered": 3, 00:15:22.546 "num_base_bdevs_operational": 3, 00:15:22.546 "base_bdevs_list": [ 00:15:22.546 { 00:15:22.546 "name": null, 00:15:22.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.546 "is_configured": false, 00:15:22.546 "data_offset": 0, 00:15:22.546 "data_size": 63488 00:15:22.546 }, 00:15:22.546 { 00:15:22.546 "name": "BaseBdev2", 00:15:22.546 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:22.546 "is_configured": true, 00:15:22.546 "data_offset": 2048, 00:15:22.546 "data_size": 63488 00:15:22.546 }, 00:15:22.546 { 00:15:22.546 "name": "BaseBdev3", 00:15:22.546 "uuid": 
"54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:22.546 "is_configured": true, 00:15:22.546 "data_offset": 2048, 00:15:22.546 "data_size": 63488 00:15:22.546 }, 00:15:22.546 { 00:15:22.546 "name": "BaseBdev4", 00:15:22.546 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:22.546 "is_configured": true, 00:15:22.546 "data_offset": 2048, 00:15:22.546 "data_size": 63488 00:15:22.546 } 00:15:22.546 ] 00:15:22.546 }' 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.546 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.805 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.805 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.805 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.805 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.805 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.805 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.805 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.805 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.805 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.805 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.065 "name": "raid_bdev1", 00:15:23.065 "uuid": "dbf6434a-7245-4801-a34d-1dd39047ef80", 00:15:23.065 "strip_size_kb": 64, 00:15:23.065 "state": 
"online", 00:15:23.065 "raid_level": "raid5f", 00:15:23.065 "superblock": true, 00:15:23.065 "num_base_bdevs": 4, 00:15:23.065 "num_base_bdevs_discovered": 3, 00:15:23.065 "num_base_bdevs_operational": 3, 00:15:23.065 "base_bdevs_list": [ 00:15:23.065 { 00:15:23.065 "name": null, 00:15:23.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.065 "is_configured": false, 00:15:23.065 "data_offset": 0, 00:15:23.065 "data_size": 63488 00:15:23.065 }, 00:15:23.065 { 00:15:23.065 "name": "BaseBdev2", 00:15:23.065 "uuid": "c2b0e4bd-fe7f-50a5-9dda-97c2549c902a", 00:15:23.065 "is_configured": true, 00:15:23.065 "data_offset": 2048, 00:15:23.065 "data_size": 63488 00:15:23.065 }, 00:15:23.065 { 00:15:23.065 "name": "BaseBdev3", 00:15:23.065 "uuid": "54fcf14b-08de-58bb-9bed-5be7b30aad87", 00:15:23.065 "is_configured": true, 00:15:23.065 "data_offset": 2048, 00:15:23.065 "data_size": 63488 00:15:23.065 }, 00:15:23.065 { 00:15:23.065 "name": "BaseBdev4", 00:15:23.065 "uuid": "e71da09a-b4ae-5918-ae01-ebd63d64cef3", 00:15:23.065 "is_configured": true, 00:15:23.065 "data_offset": 2048, 00:15:23.065 "data_size": 63488 00:15:23.065 } 00:15:23.065 ] 00:15:23.065 }' 00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95671 00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 95671 ']' 00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 95671 00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@959 -- # uname
00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95671
00:15:23.065 killing process with pid 95671
00:15:23.065 Received shutdown signal, test time was about 60.000000 seconds
00:15:23.065
00:15:23.065 Latency(us)
00:15:23.065 [2024-11-21T05:01:39.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:23.065 [2024-11-21T05:01:39.800Z] ===================================================================================================================
00:15:23.065 [2024-11-21T05:01:39.800Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95671'
00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 95671
00:15:23.065 [2024-11-21 05:01:39.701507] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:23.065 [2024-11-21 05:01:39.701621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:23.065 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 95671
00:15:23.065 [2024-11-21 05:01:39.701694] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:23.065 [2024-11-21 05:01:39.701705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:15:23.065 [2024-11-21 05:01:39.753867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:23.325 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:15:23.325
00:15:23.325 real 0m25.275s
00:15:23.325 user 0m31.716s
00:15:23.325 sys 0m3.377s
00:15:23.325 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:23.325 ************************************
00:15:23.325 END TEST raid5f_rebuild_test_sb
00:15:23.325 ************************************
00:15:23.326 05:01:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:23.326 05:01:40 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096
00:15:23.326 05:01:40 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true
00:15:23.326 05:01:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:15:23.326 05:01:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:23.326 05:01:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:23.326 ************************************
00:15:23.326 START TEST raid_state_function_test_sb_4k
00:15:23.326 ************************************
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96482
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96482'
00:15:23.326 Process raid pid: 96482
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96482
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 96482 ']'
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:23.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:23.326 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:23.586 [2024-11-21 05:01:40.145857] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:15:23.587 [2024-11-21 05:01:40.146055] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:23.846 [2024-11-21 05:01:40.328807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:23.846 [2024-11-21 05:01:40.355713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:23.846 [2024-11-21 05:01:40.399388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:23.846 [2024-11-21 05:01:40.399527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:24.416 [2024-11-21 05:01:40.976925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:24.416 [2024-11-21 05:01:40.976981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:24.416 [2024-11-21 05:01:40.976992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:24.416 [2024-11-21 05:01:40.977002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.416 05:01:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:24.416 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.416 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:24.416 "name": "Existed_Raid",
00:15:24.416 "uuid": "0956bc93-c488-4352-aa10-b189a9b643f7",
00:15:24.416 "strip_size_kb": 0,
00:15:24.416 "state": "configuring",
00:15:24.416 "raid_level": "raid1",
00:15:24.416 "superblock": true,
00:15:24.416 "num_base_bdevs": 2,
00:15:24.416 "num_base_bdevs_discovered": 0,
00:15:24.416 "num_base_bdevs_operational": 2,
00:15:24.416 "base_bdevs_list": [
00:15:24.416 {
00:15:24.416 "name": "BaseBdev1",
00:15:24.416 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:24.416 "is_configured": false,
00:15:24.416 "data_offset": 0,
00:15:24.416 "data_size": 0
00:15:24.416 },
00:15:24.416 {
00:15:24.416 "name": "BaseBdev2",
00:15:24.416 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:24.416 "is_configured": false,
00:15:24.416 "data_offset": 0,
00:15:24.416 "data_size": 0
00:15:24.416 }
00:15:24.416 ]
00:15:24.416 }'
00:15:24.416 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:24.416 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:24.985 [2024-11-21 05:01:41.436069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:24.985 [2024-11-21 05:01:41.436175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:24.985 [2024-11-21 05:01:41.448056] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:24.985 [2024-11-21 05:01:41.448181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:24.985 [2024-11-21 05:01:41.448222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:24.985 [2024-11-21 05:01:41.448285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:24.985 [2024-11-21 05:01:41.469142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:24.985 BaseBdev1
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.985 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:24.985 [
00:15:24.985 {
00:15:24.985 "name": "BaseBdev1",
00:15:24.985 "aliases": [
00:15:24.985 "064f77dc-810f-420a-abc0-4984864f2a93"
00:15:24.985 ],
00:15:24.985 "product_name": "Malloc disk",
00:15:24.985 "block_size": 4096,
00:15:24.985 "num_blocks": 8192,
00:15:24.985 "uuid": "064f77dc-810f-420a-abc0-4984864f2a93",
00:15:24.985 "assigned_rate_limits": {
00:15:24.985 "rw_ios_per_sec": 0,
00:15:24.985 "rw_mbytes_per_sec": 0,
00:15:24.985 "r_mbytes_per_sec": 0,
00:15:24.986 "w_mbytes_per_sec": 0
00:15:24.986 },
00:15:24.986 "claimed": true,
00:15:24.986 "claim_type": "exclusive_write",
00:15:24.986 "zoned": false,
00:15:24.986 "supported_io_types": {
00:15:24.986 "read": true,
00:15:24.986 "write": true,
00:15:24.986 "unmap": true,
00:15:24.986 "flush": true,
00:15:24.986 "reset": true,
00:15:24.986 "nvme_admin": false,
00:15:24.986 "nvme_io": false,
00:15:24.986 "nvme_io_md": false,
00:15:24.986 "write_zeroes": true,
00:15:24.986 "zcopy": true,
00:15:24.986 "get_zone_info": false,
00:15:24.986 "zone_management": false,
00:15:24.986 "zone_append": false,
00:15:24.986 "compare": false,
00:15:24.986 "compare_and_write": false,
00:15:24.986 "abort": true,
00:15:24.986 "seek_hole": false,
00:15:24.986 "seek_data": false,
00:15:24.986 "copy": true,
00:15:24.986 "nvme_iov_md": false
00:15:24.986 },
00:15:24.986 "memory_domains": [
00:15:24.986 {
00:15:24.986 "dma_device_id": "system",
00:15:24.986 "dma_device_type": 1
00:15:24.986 },
00:15:24.986 {
00:15:24.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:24.986 "dma_device_type": 2
00:15:24.986 }
00:15:24.986 ],
00:15:24.986 "driver_specific": {}
00:15:24.986 }
00:15:24.986 ]
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:24.986 "name": "Existed_Raid",
00:15:24.986 "uuid": "a626a43e-10ac-47bb-a47d-376823200f3d",
00:15:24.986 "strip_size_kb": 0,
00:15:24.986 "state": "configuring",
00:15:24.986 "raid_level": "raid1",
00:15:24.986 "superblock": true,
00:15:24.986 "num_base_bdevs": 2,
00:15:24.986 "num_base_bdevs_discovered": 1,
00:15:24.986 "num_base_bdevs_operational": 2,
00:15:24.986 "base_bdevs_list": [
00:15:24.986 {
00:15:24.986 "name": "BaseBdev1",
00:15:24.986 "uuid": "064f77dc-810f-420a-abc0-4984864f2a93",
00:15:24.986 "is_configured": true,
00:15:24.986 "data_offset": 256,
00:15:24.986 "data_size": 7936
00:15:24.986 },
00:15:24.986 {
00:15:24.986 "name": "BaseBdev2",
00:15:24.986 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:24.986 "is_configured": false,
00:15:24.986 "data_offset": 0,
00:15:24.986 "data_size": 0
00:15:24.986 }
00:15:24.986 ]
00:15:24.986 }'
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:24.986 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:25.244 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:25.244 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.244 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:25.508 [2024-11-21 05:01:41.976253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:25.508 [2024-11-21 05:01:41.976298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:25.508 [2024-11-21 05:01:41.984281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:25.508 [2024-11-21 05:01:41.986131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:25.508 [2024-11-21 05:01:41.986213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.508 05:01:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:25.508 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.508 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:25.508 "name": "Existed_Raid",
00:15:25.508 "uuid": "39063c17-32f2-408b-bee4-8c02032b5130",
00:15:25.508 "strip_size_kb": 0,
00:15:25.508 "state": "configuring",
00:15:25.508 "raid_level": "raid1",
00:15:25.508 "superblock": true,
00:15:25.508 "num_base_bdevs": 2,
00:15:25.508 "num_base_bdevs_discovered": 1,
00:15:25.508 "num_base_bdevs_operational": 2,
00:15:25.508 "base_bdevs_list": [
00:15:25.508 {
00:15:25.508 "name": "BaseBdev1",
00:15:25.508 "uuid": "064f77dc-810f-420a-abc0-4984864f2a93",
00:15:25.508 "is_configured": true,
00:15:25.509 "data_offset": 256,
00:15:25.509 "data_size": 7936
00:15:25.509 },
00:15:25.509 {
00:15:25.509 "name": "BaseBdev2",
00:15:25.509 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:25.509 "is_configured": false,
00:15:25.509 "data_offset": 0,
00:15:25.509 "data_size": 0
00:15:25.509 }
00:15:25.509 ]
00:15:25.509 }'
00:15:25.509 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:25.509 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:25.805 [2024-11-21 05:01:42.474509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:25.805 [2024-11-21 05:01:42.474699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:15:25.805 [2024-11-21 05:01:42.474714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:15:25.805 [2024-11-21 05:01:42.474950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:15:25.805 [2024-11-21 05:01:42.475076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:15:25.805 [2024-11-21 05:01:42.475134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:15:25.805 [2024-11-21 05:01:42.475259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:25.805 BaseBdev2
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.805 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:25.805 [
00:15:25.805 {
00:15:25.805 "name": "BaseBdev2",
00:15:25.805 "aliases": [
00:15:25.805 "6c34ceea-1c11-4d6e-8c81-2fb75d9e25db"
00:15:25.805 ],
00:15:25.805 "product_name": "Malloc disk",
00:15:25.805 "block_size": 4096,
00:15:25.805 "num_blocks": 8192,
00:15:25.805 "uuid": "6c34ceea-1c11-4d6e-8c81-2fb75d9e25db",
00:15:25.805 "assigned_rate_limits": {
00:15:25.805 "rw_ios_per_sec": 0,
00:15:25.805 "rw_mbytes_per_sec": 0,
00:15:25.805 "r_mbytes_per_sec": 0,
00:15:25.805 "w_mbytes_per_sec": 0
00:15:25.805 },
00:15:25.805 "claimed": true,
00:15:25.805 "claim_type": "exclusive_write",
00:15:25.805 "zoned": false,
00:15:25.805 "supported_io_types": {
00:15:25.805 "read": true,
00:15:25.805 "write": true,
00:15:25.805 "unmap": true,
00:15:25.805 "flush": true,
00:15:25.805 "reset": true,
00:15:25.805 "nvme_admin": false,
00:15:25.805 "nvme_io": false,
00:15:25.805 "nvme_io_md": false,
00:15:25.805 "write_zeroes": true,
00:15:25.805 "zcopy": true,
00:15:25.805 "get_zone_info": false,
00:15:25.805 "zone_management": false,
00:15:25.805 "zone_append": false,
00:15:25.805 "compare": false,
00:15:25.805 "compare_and_write": false,
00:15:25.805 "abort": true,
00:15:25.805 "seek_hole": false,
00:15:25.805 "seek_data": false,
00:15:25.805 "copy": true,
00:15:25.805 "nvme_iov_md": false
00:15:25.805 },
00:15:25.805 "memory_domains": [
00:15:25.805 {
00:15:25.805 "dma_device_id": "system",
00:15:25.805 "dma_device_type": 1
00:15:25.805 },
00:15:25.805 {
00:15:25.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:25.805 "dma_device_type": 2
00:15:25.805 }
00:15:25.805 ],
00:15:25.805 "driver_specific": {}
00:15:25.806 }
00:15:25.806 ]
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:25.806 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.080 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:26.080 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.080 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:26.080 "name": "Existed_Raid",
00:15:26.080 "uuid": "39063c17-32f2-408b-bee4-8c02032b5130",
00:15:26.080 "strip_size_kb": 0,
00:15:26.080 "state": "online",
00:15:26.080 "raid_level": "raid1",
00:15:26.080 "superblock": true,
00:15:26.080 "num_base_bdevs": 2,
00:15:26.080 "num_base_bdevs_discovered": 2,
00:15:26.080 "num_base_bdevs_operational": 2,
00:15:26.080 "base_bdevs_list": [
00:15:26.080 {
00:15:26.080 "name": "BaseBdev1",
00:15:26.080 "uuid": "064f77dc-810f-420a-abc0-4984864f2a93",
00:15:26.080 "is_configured": true,
00:15:26.080 "data_offset": 256,
00:15:26.080 "data_size": 7936
00:15:26.080 },
00:15:26.080 {
00:15:26.080 "name": "BaseBdev2",
00:15:26.080 "uuid": "6c34ceea-1c11-4d6e-8c81-2fb75d9e25db",
00:15:26.080 "is_configured": true,
00:15:26.080 "data_offset": 256,
00:15:26.080 "data_size": 7936
00:15:26.080 }
00:15:26.080 ]
00:15:26.080 }'
00:15:26.080 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:26.080 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:26.340 [2024-11-21 05:01:42.957962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:26.340 "name": "Existed_Raid",
00:15:26.340 "aliases": [
00:15:26.340 "39063c17-32f2-408b-bee4-8c02032b5130"
00:15:26.340 ],
00:15:26.340 "product_name": "Raid Volume",
00:15:26.340 "block_size": 4096,
00:15:26.340 "num_blocks": 7936,
00:15:26.340 "uuid": "39063c17-32f2-408b-bee4-8c02032b5130",
00:15:26.340 "assigned_rate_limits": {
00:15:26.340 "rw_ios_per_sec": 0,
00:15:26.340 "rw_mbytes_per_sec": 0,
00:15:26.340 "r_mbytes_per_sec": 0,
00:15:26.340 "w_mbytes_per_sec": 0
00:15:26.340 },
00:15:26.340 "claimed": false,
00:15:26.340 "zoned": false,
00:15:26.340 "supported_io_types": {
00:15:26.340 "read": true,
00:15:26.340 "write": true,
00:15:26.340 "unmap": false,
00:15:26.340 "flush": false,
00:15:26.340 "reset": true,
00:15:26.340 "nvme_admin": false,
00:15:26.340 "nvme_io": false,
00:15:26.340 "nvme_io_md": false,
00:15:26.340 "write_zeroes": true,
00:15:26.340 "zcopy": false,
00:15:26.340 "get_zone_info": false,
00:15:26.340 "zone_management": false,
00:15:26.340 "zone_append": false,
00:15:26.340 "compare": false,
00:15:26.340 "compare_and_write": false,
00:15:26.340 "abort": false,
00:15:26.340 "seek_hole": false,
00:15:26.340 "seek_data": false,
00:15:26.340 "copy": false,
00:15:26.340 "nvme_iov_md": false
00:15:26.340 },
00:15:26.340 "memory_domains": [
00:15:26.340 {
00:15:26.340 "dma_device_id": "system",
00:15:26.340 "dma_device_type": 1
00:15:26.340 },
00:15:26.340 {
00:15:26.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:26.340 "dma_device_type": 2
00:15:26.340 },
00:15:26.340 {
00:15:26.340 "dma_device_id": "system",
00:15:26.340 "dma_device_type": 1
00:15:26.340 },
00:15:26.340 {
00:15:26.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:26.340 "dma_device_type": 2
00:15:26.340 }
00:15:26.340 ],
00:15:26.340 "driver_specific": {
00:15:26.340 "raid": {
00:15:26.340 "uuid": "39063c17-32f2-408b-bee4-8c02032b5130",
00:15:26.340 "strip_size_kb": 0,
00:15:26.340 "state": "online",
00:15:26.340 "raid_level": "raid1",
00:15:26.340 "superblock": true,
00:15:26.340 "num_base_bdevs": 2,
00:15:26.340 "num_base_bdevs_discovered": 2,
00:15:26.340 "num_base_bdevs_operational": 2,
00:15:26.340 "base_bdevs_list": [
00:15:26.340 {
00:15:26.340 "name": "BaseBdev1",
00:15:26.340 "uuid": "064f77dc-810f-420a-abc0-4984864f2a93",
00:15:26.340 "is_configured": true,
00:15:26.340 "data_offset": 256,
00:15:26.340 "data_size": 7936
00:15:26.340 },
00:15:26.340 {
00:15:26.340 "name": "BaseBdev2",
00:15:26.340 "uuid": "6c34ceea-1c11-4d6e-8c81-2fb75d9e25db",
00:15:26.340 "is_configured": true,
00:15:26.340 "data_offset": 256,
00:15:26.340 "data_size": 7936
00:15:26.340 }
00:15:26.340 ]
00:15:26.340 }
00:15:26.340 }
00:15:26.340 }'
00:15:26.340 05:01:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:26.340 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:15:26.340 BaseBdev2'
00:15:26.340 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 '
00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.600 
05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.600 [2024-11-21 05:01:43.185353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.600 05:01:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.600 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.601 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.601 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.601 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.601 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.601 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.601 "name": "Existed_Raid", 00:15:26.601 "uuid": "39063c17-32f2-408b-bee4-8c02032b5130", 00:15:26.601 "strip_size_kb": 0, 00:15:26.601 "state": "online", 00:15:26.601 "raid_level": "raid1", 00:15:26.601 "superblock": true, 00:15:26.601 "num_base_bdevs": 2, 00:15:26.601 "num_base_bdevs_discovered": 1, 00:15:26.601 "num_base_bdevs_operational": 1, 00:15:26.601 "base_bdevs_list": [ 00:15:26.601 { 00:15:26.601 "name": null, 00:15:26.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.601 "is_configured": false, 00:15:26.601 "data_offset": 0, 00:15:26.601 "data_size": 7936 00:15:26.601 }, 00:15:26.601 { 00:15:26.601 "name": "BaseBdev2", 00:15:26.601 "uuid": "6c34ceea-1c11-4d6e-8c81-2fb75d9e25db", 00:15:26.601 "is_configured": true, 00:15:26.601 "data_offset": 256, 00:15:26.601 "data_size": 7936 00:15:26.601 } 00:15:26.601 ] 00:15:26.601 }' 00:15:26.601 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.601 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:27.171 05:01:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.171 [2024-11-21 05:01:43.671775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.171 [2024-11-21 05:01:43.671968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.171 [2024-11-21 05:01:43.683460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.171 [2024-11-21 05:01:43.683529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.171 [2024-11-21 05:01:43.683542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:27.171 05:01:43 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96482 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 96482 ']' 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 96482 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96482 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96482' 00:15:27.171 killing process with pid 96482 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 96482 00:15:27.171 [2024-11-21 05:01:43.779004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.171 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 96482 00:15:27.171 [2024-11-21 05:01:43.780005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.431 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:27.431 00:15:27.431 real 0m3.962s 00:15:27.431 user 0m6.228s 00:15:27.431 sys 0m0.880s 00:15:27.432 ************************************ 00:15:27.432 END TEST raid_state_function_test_sb_4k 00:15:27.432 ************************************ 00:15:27.432 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.432 05:01:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.432 05:01:44 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:27.432 05:01:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:27.432 05:01:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.432 05:01:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.432 ************************************ 00:15:27.432 START TEST raid_superblock_test_4k 00:15:27.432 ************************************ 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96723 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 96723 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 96723 ']' 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.432 05:01:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.692 [2024-11-21 05:01:44.178667] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:15:27.692 [2024-11-21 05:01:44.178778] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96723 ] 00:15:27.692 [2024-11-21 05:01:44.351334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.692 [2024-11-21 05:01:44.378461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.692 [2024-11-21 05:01:44.422441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.692 [2024-11-21 05:01:44.422477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.634 05:01:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.634 05:01:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:15:28.634 05:01:44 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:28.634 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:28.634 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:28.634 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:28.634 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:28.634 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:28.634 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:28.634 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:28.634 05:01:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.634 malloc1 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.634 [2024-11-21 05:01:45.024682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:28.634 [2024-11-21 05:01:45.024878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.634 
[2024-11-21 05:01:45.024951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:28.634 [2024-11-21 05:01:45.024997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.634 [2024-11-21 05:01:45.027174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.634 [2024-11-21 05:01:45.027283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:28.634 pt1 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.634 malloc2 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.634 [2024-11-21 05:01:45.057234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:28.634 [2024-11-21 05:01:45.057353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.634 [2024-11-21 05:01:45.057386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:28.634 [2024-11-21 05:01:45.057433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.634 [2024-11-21 05:01:45.059512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.634 [2024-11-21 05:01:45.059596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:28.634 pt2 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.634 [2024-11-21 05:01:45.069265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:28.634 [2024-11-21 05:01:45.071037] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:28.634 [2024-11-21 05:01:45.071182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:28.634 [2024-11-21 05:01:45.071197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:28.634 [2024-11-21 05:01:45.071483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:28.634 [2024-11-21 05:01:45.071626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:28.634 [2024-11-21 05:01:45.071637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:28.634 [2024-11-21 05:01:45.071754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.634 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.635 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.635 "name": "raid_bdev1", 00:15:28.635 "uuid": "b4da5c2d-9fc0-44e4-9dec-d22676d8877c", 00:15:28.635 "strip_size_kb": 0, 00:15:28.635 "state": "online", 00:15:28.635 "raid_level": "raid1", 00:15:28.635 "superblock": true, 00:15:28.635 "num_base_bdevs": 2, 00:15:28.635 "num_base_bdevs_discovered": 2, 00:15:28.635 "num_base_bdevs_operational": 2, 00:15:28.635 "base_bdevs_list": [ 00:15:28.635 { 00:15:28.635 "name": "pt1", 00:15:28.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.635 "is_configured": true, 00:15:28.635 "data_offset": 256, 00:15:28.635 "data_size": 7936 00:15:28.635 }, 00:15:28.635 { 00:15:28.635 "name": "pt2", 00:15:28.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.635 "is_configured": true, 00:15:28.635 "data_offset": 256, 00:15:28.635 "data_size": 7936 00:15:28.635 } 00:15:28.635 ] 00:15:28.635 }' 00:15:28.635 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.635 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.895 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:28.895 05:01:45 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:28.895 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.895 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.896 [2024-11-21 05:01:45.514385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.896 "name": "raid_bdev1", 00:15:28.896 "aliases": [ 00:15:28.896 "b4da5c2d-9fc0-44e4-9dec-d22676d8877c" 00:15:28.896 ], 00:15:28.896 "product_name": "Raid Volume", 00:15:28.896 "block_size": 4096, 00:15:28.896 "num_blocks": 7936, 00:15:28.896 "uuid": "b4da5c2d-9fc0-44e4-9dec-d22676d8877c", 00:15:28.896 "assigned_rate_limits": { 00:15:28.896 "rw_ios_per_sec": 0, 00:15:28.896 "rw_mbytes_per_sec": 0, 00:15:28.896 "r_mbytes_per_sec": 0, 00:15:28.896 "w_mbytes_per_sec": 0 00:15:28.896 }, 00:15:28.896 "claimed": false, 00:15:28.896 "zoned": false, 00:15:28.896 "supported_io_types": { 00:15:28.896 "read": true, 00:15:28.896 "write": true, 00:15:28.896 "unmap": false, 00:15:28.896 "flush": false, 
00:15:28.896 "reset": true, 00:15:28.896 "nvme_admin": false, 00:15:28.896 "nvme_io": false, 00:15:28.896 "nvme_io_md": false, 00:15:28.896 "write_zeroes": true, 00:15:28.896 "zcopy": false, 00:15:28.896 "get_zone_info": false, 00:15:28.896 "zone_management": false, 00:15:28.896 "zone_append": false, 00:15:28.896 "compare": false, 00:15:28.896 "compare_and_write": false, 00:15:28.896 "abort": false, 00:15:28.896 "seek_hole": false, 00:15:28.896 "seek_data": false, 00:15:28.896 "copy": false, 00:15:28.896 "nvme_iov_md": false 00:15:28.896 }, 00:15:28.896 "memory_domains": [ 00:15:28.896 { 00:15:28.896 "dma_device_id": "system", 00:15:28.896 "dma_device_type": 1 00:15:28.896 }, 00:15:28.896 { 00:15:28.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.896 "dma_device_type": 2 00:15:28.896 }, 00:15:28.896 { 00:15:28.896 "dma_device_id": "system", 00:15:28.896 "dma_device_type": 1 00:15:28.896 }, 00:15:28.896 { 00:15:28.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.896 "dma_device_type": 2 00:15:28.896 } 00:15:28.896 ], 00:15:28.896 "driver_specific": { 00:15:28.896 "raid": { 00:15:28.896 "uuid": "b4da5c2d-9fc0-44e4-9dec-d22676d8877c", 00:15:28.896 "strip_size_kb": 0, 00:15:28.896 "state": "online", 00:15:28.896 "raid_level": "raid1", 00:15:28.896 "superblock": true, 00:15:28.896 "num_base_bdevs": 2, 00:15:28.896 "num_base_bdevs_discovered": 2, 00:15:28.896 "num_base_bdevs_operational": 2, 00:15:28.896 "base_bdevs_list": [ 00:15:28.896 { 00:15:28.896 "name": "pt1", 00:15:28.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.896 "is_configured": true, 00:15:28.896 "data_offset": 256, 00:15:28.896 "data_size": 7936 00:15:28.896 }, 00:15:28.896 { 00:15:28.896 "name": "pt2", 00:15:28.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.896 "is_configured": true, 00:15:28.896 "data_offset": 256, 00:15:28.896 "data_size": 7936 00:15:28.896 } 00:15:28.896 ] 00:15:28.896 } 00:15:28.896 } 00:15:28.896 }' 00:15:28.896 05:01:45 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:28.896 pt2' 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.896 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.157 05:01:45 
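The `@189`/`@192` steps above build comparison strings by joining `.block_size, .md_size, .md_interleave, .dif_type` with jq and matching them with `[[ ... ]]`. A minimal pure-bash sketch of that comparison (the field values are simulated here, not fetched over RPC; for a plain 4k bdev the metadata fields are null and join to empty strings, which is why the xtrace shows `[[ 4096 == \4\0\9\6\ \ \ ]]` with escaped trailing spaces):

```shell
# Simulated bdev fields as reported by bdev_get_bdevs; md_size, md_interleave
# and dif_type are null for a plain 4k bdev, so they join to empty strings.
block_size=4096
md_size=""
md_interleave=""
dif_type=""

# Equivalent of: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
cmp_raid_bdev="$block_size $md_size $md_interleave $dif_type"
cmp_base_bdev="$block_size $md_size $md_interleave $dif_type"

# The trailing spaces are significant: '4096   ' only equals '4096   ',
# so a base bdev with any metadata configured would fail this check.
if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
    echo "base bdev matches raid bdev"
fi
```

Quoting the right-hand side keeps the trailing spaces literal instead of being treated as a glob pattern boundary.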
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:29.157 [2024-11-21 05:01:45.733864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b4da5c2d-9fc0-44e4-9dec-d22676d8877c 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z b4da5c2d-9fc0-44e4-9dec-d22676d8877c ']' 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.157 [2024-11-21 05:01:45.781573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.157 [2024-11-21 05:01:45.781635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.157 [2024-11-21 05:01:45.781747] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.157 [2024-11-21 05:01:45.781847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.157 [2024-11-21 05:01:45.781892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.157 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.418 [2024-11-21 05:01:45.921364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:29.418 [2024-11-21 05:01:45.923482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:29.418 [2024-11-21 05:01:45.923555] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:29.418 [2024-11-21 05:01:45.923598] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:29.418 [2024-11-21 05:01:45.923613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.418 [2024-11-21 05:01:45.923621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:29.418 request: 00:15:29.418 { 00:15:29.418 "name": "raid_bdev1", 00:15:29.418 "raid_level": "raid1", 00:15:29.418 "base_bdevs": [ 00:15:29.418 "malloc1", 00:15:29.418 "malloc2" 00:15:29.418 ], 00:15:29.418 "superblock": false, 00:15:29.418 "method": "bdev_raid_create", 00:15:29.418 "req_id": 1 00:15:29.418 } 00:15:29.418 Got JSON-RPC error response 00:15:29.418 response: 00:15:29.418 { 00:15:29.418 "code": -17, 00:15:29.418 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:29.418 } 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.418 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.419 [2024-11-21 05:01:45.989197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.419 [2024-11-21 05:01:45.989238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.419 [2024-11-21 05:01:45.989258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:29.419 [2024-11-21 05:01:45.989267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.419 [2024-11-21 05:01:45.991682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.419 [2024-11-21 05:01:45.991765] vbdev_passthru.c: 
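The `@457` step wraps `bdev_raid_create` in the `NOT` helper from `autotest_common.sh`: the test passes only because the RPC fails with `-17` ("File exists"), since malloc1/malloc2 already carry a superblock for raid_bdev1. A hedged sketch of the negation pattern (the real helper also does `valid_exec_arg` checks and the `(( es > 128 ))` signal bookkeeping traced above, which are elided here):

```shell
# Simplified sketch of autotest_common.sh's NOT helper: run a command and
# succeed only if that command fails, i.e. assert an expected error.
NOT() {
    local es=0
    "$@" || es=$?
    # The full helper inspects es for signals (> 128); here a plain
    # non-zero status means the command failed as expected.
    (( es != 0 ))
}

# The command under test is expected to fail, so NOT reports success.
NOT false && echo "failure was expected and observed"
```

Used against an RPC, this reads as `NOT rpc_cmd bdev_raid_create ...`: the suite records a pass when the creation is correctly rejected.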
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.419 [2024-11-21 05:01:45.991854] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:29.419 [2024-11-21 05:01:45.991901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:29.419 pt1 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.419 05:01:45 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.419 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.419 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.419 "name": "raid_bdev1", 00:15:29.419 "uuid": "b4da5c2d-9fc0-44e4-9dec-d22676d8877c", 00:15:29.419 "strip_size_kb": 0, 00:15:29.419 "state": "configuring", 00:15:29.419 "raid_level": "raid1", 00:15:29.419 "superblock": true, 00:15:29.419 "num_base_bdevs": 2, 00:15:29.419 "num_base_bdevs_discovered": 1, 00:15:29.419 "num_base_bdevs_operational": 2, 00:15:29.419 "base_bdevs_list": [ 00:15:29.419 { 00:15:29.419 "name": "pt1", 00:15:29.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:29.419 "is_configured": true, 00:15:29.419 "data_offset": 256, 00:15:29.419 "data_size": 7936 00:15:29.419 }, 00:15:29.419 { 00:15:29.419 "name": null, 00:15:29.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.419 "is_configured": false, 00:15:29.419 "data_offset": 256, 00:15:29.419 "data_size": 7936 00:15:29.419 } 00:15:29.419 ] 00:15:29.419 }' 00:15:29.419 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.419 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:15:29.990 [2024-11-21 05:01:46.468379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:29.990 [2024-11-21 05:01:46.468425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.990 [2024-11-21 05:01:46.468444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:29.990 [2024-11-21 05:01:46.468452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.990 [2024-11-21 05:01:46.468807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.990 [2024-11-21 05:01:46.468823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:29.990 [2024-11-21 05:01:46.468880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:29.990 [2024-11-21 05:01:46.468903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.990 [2024-11-21 05:01:46.468997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:29.990 [2024-11-21 05:01:46.469007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:29.990 [2024-11-21 05:01:46.469257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:29.990 [2024-11-21 05:01:46.469383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:29.990 [2024-11-21 05:01:46.469399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:29.990 [2024-11-21 05:01:46.469496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.990 pt2 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:29.990 05:01:46 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.990 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.991 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.991 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.991 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.991 "name": "raid_bdev1", 00:15:29.991 "uuid": "b4da5c2d-9fc0-44e4-9dec-d22676d8877c", 00:15:29.991 
"strip_size_kb": 0, 00:15:29.991 "state": "online", 00:15:29.991 "raid_level": "raid1", 00:15:29.991 "superblock": true, 00:15:29.991 "num_base_bdevs": 2, 00:15:29.991 "num_base_bdevs_discovered": 2, 00:15:29.991 "num_base_bdevs_operational": 2, 00:15:29.991 "base_bdevs_list": [ 00:15:29.991 { 00:15:29.991 "name": "pt1", 00:15:29.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:29.991 "is_configured": true, 00:15:29.991 "data_offset": 256, 00:15:29.991 "data_size": 7936 00:15:29.991 }, 00:15:29.991 { 00:15:29.991 "name": "pt2", 00:15:29.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.991 "is_configured": true, 00:15:29.991 "data_offset": 256, 00:15:29.991 "data_size": 7936 00:15:29.991 } 00:15:29.991 ] 00:15:29.991 }' 00:15:29.991 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.991 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.251 05:01:46 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.251 [2024-11-21 05:01:46.927820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.251 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:30.251 "name": "raid_bdev1", 00:15:30.251 "aliases": [ 00:15:30.251 "b4da5c2d-9fc0-44e4-9dec-d22676d8877c" 00:15:30.251 ], 00:15:30.251 "product_name": "Raid Volume", 00:15:30.251 "block_size": 4096, 00:15:30.251 "num_blocks": 7936, 00:15:30.251 "uuid": "b4da5c2d-9fc0-44e4-9dec-d22676d8877c", 00:15:30.251 "assigned_rate_limits": { 00:15:30.251 "rw_ios_per_sec": 0, 00:15:30.251 "rw_mbytes_per_sec": 0, 00:15:30.251 "r_mbytes_per_sec": 0, 00:15:30.251 "w_mbytes_per_sec": 0 00:15:30.251 }, 00:15:30.251 "claimed": false, 00:15:30.251 "zoned": false, 00:15:30.251 "supported_io_types": { 00:15:30.251 "read": true, 00:15:30.251 "write": true, 00:15:30.251 "unmap": false, 00:15:30.251 "flush": false, 00:15:30.251 "reset": true, 00:15:30.251 "nvme_admin": false, 00:15:30.251 "nvme_io": false, 00:15:30.251 "nvme_io_md": false, 00:15:30.251 "write_zeroes": true, 00:15:30.251 "zcopy": false, 00:15:30.251 "get_zone_info": false, 00:15:30.251 "zone_management": false, 00:15:30.251 "zone_append": false, 00:15:30.251 "compare": false, 00:15:30.251 "compare_and_write": false, 00:15:30.251 "abort": false, 00:15:30.251 "seek_hole": false, 00:15:30.251 "seek_data": false, 00:15:30.251 "copy": false, 00:15:30.251 "nvme_iov_md": false 00:15:30.251 }, 00:15:30.251 "memory_domains": [ 00:15:30.251 { 00:15:30.251 "dma_device_id": "system", 00:15:30.251 "dma_device_type": 1 00:15:30.251 }, 00:15:30.251 { 00:15:30.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.251 "dma_device_type": 2 00:15:30.251 }, 00:15:30.251 { 00:15:30.251 "dma_device_id": "system", 00:15:30.251 
"dma_device_type": 1 00:15:30.251 }, 00:15:30.251 { 00:15:30.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.251 "dma_device_type": 2 00:15:30.251 } 00:15:30.251 ], 00:15:30.251 "driver_specific": { 00:15:30.251 "raid": { 00:15:30.251 "uuid": "b4da5c2d-9fc0-44e4-9dec-d22676d8877c", 00:15:30.251 "strip_size_kb": 0, 00:15:30.251 "state": "online", 00:15:30.251 "raid_level": "raid1", 00:15:30.251 "superblock": true, 00:15:30.251 "num_base_bdevs": 2, 00:15:30.251 "num_base_bdevs_discovered": 2, 00:15:30.251 "num_base_bdevs_operational": 2, 00:15:30.251 "base_bdevs_list": [ 00:15:30.252 { 00:15:30.252 "name": "pt1", 00:15:30.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:30.252 "is_configured": true, 00:15:30.252 "data_offset": 256, 00:15:30.252 "data_size": 7936 00:15:30.252 }, 00:15:30.252 { 00:15:30.252 "name": "pt2", 00:15:30.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:30.252 "is_configured": true, 00:15:30.252 "data_offset": 256, 00:15:30.252 "data_size": 7936 00:15:30.252 } 00:15:30.252 ] 00:15:30.252 } 00:15:30.252 } 00:15:30.252 }' 00:15:30.252 05:01:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:30.513 pt2' 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.513 
05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:30.513 [2024-11-21 05:01:47.155465] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' b4da5c2d-9fc0-44e4-9dec-d22676d8877c '!=' b4da5c2d-9fc0-44e4-9dec-d22676d8877c ']' 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.513 [2024-11-21 05:01:47.207220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.513 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.773 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.773 "name": "raid_bdev1", 00:15:30.773 "uuid": "b4da5c2d-9fc0-44e4-9dec-d22676d8877c", 00:15:30.773 "strip_size_kb": 0, 00:15:30.773 "state": "online", 00:15:30.773 "raid_level": "raid1", 00:15:30.773 "superblock": true, 00:15:30.773 "num_base_bdevs": 2, 00:15:30.773 "num_base_bdevs_discovered": 1, 00:15:30.773 "num_base_bdevs_operational": 1, 00:15:30.773 "base_bdevs_list": [ 00:15:30.773 { 00:15:30.773 "name": null, 00:15:30.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.774 "is_configured": false, 00:15:30.774 "data_offset": 0, 00:15:30.774 "data_size": 7936 00:15:30.774 }, 00:15:30.774 { 00:15:30.774 "name": "pt2", 00:15:30.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:30.774 "is_configured": true, 00:15:30.774 "data_offset": 256, 00:15:30.774 "data_size": 7936 00:15:30.774 } 00:15:30.774 ] 00:15:30.774 }' 00:15:30.774 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.774 05:01:47 
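After `bdev_passthru_delete pt1`, the array above stays `online` with `num_base_bdevs_discovered: 1` rather than failing. The trace's `@491`/`@198-199` steps show why: `has_redundancy raid1` returns 0. A minimal reproduction of that check (the `raid5f` branch is assumed from the same helper's pattern, not shown in this trace):

```shell
# Sketch of bdev_raid.sh's has_redundancy helper, as exercised at @198-199:
# redundant levels tolerate losing a base bdev, so the raid stays online
# in degraded mode instead of going offline.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;  # redundant: survives one lost base bdev
        *) return 1 ;;               # e.g. raid0: any loss fails the array
    esac
}

has_redundancy raid1 && echo "raid1: array survives losing one base bdev"
has_redundancy raid0 || echo "raid0: no redundancy"
```

This is what licenses the subsequent `verify_raid_bdev_state raid_bdev1 online raid1 0 1` check: expected state `online` with only one operational base bdev.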
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.035 [2024-11-21 05:01:47.638396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:31.035 [2024-11-21 05:01:47.638461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:31.035 [2024-11-21 05:01:47.638518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:31.035 [2024-11-21 05:01:47.638564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:31.035 [2024-11-21 05:01:47.638582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.035 [2024-11-21 05:01:47.710283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:31.035 [2024-11-21 05:01:47.710323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:31.035 [2024-11-21 05:01:47.710340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:15:31.035 [2024-11-21 05:01:47.710347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:31.035 [2024-11-21 05:01:47.712738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:31.035 [2024-11-21 05:01:47.712770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:31.035 [2024-11-21 05:01:47.712837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:31.035 [2024-11-21 05:01:47.712864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:31.035 [2024-11-21 05:01:47.712934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:15:31.035 [2024-11-21 05:01:47.712942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:15:31.035 [2024-11-21 05:01:47.713148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:15:31.035 [2024-11-21 05:01:47.713285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:15:31.035 [2024-11-21 05:01:47.713298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:15:31.035 [2024-11-21 05:01:47.713394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:31.035 pt2
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:31.035 "name": "raid_bdev1",
00:15:31.035 "uuid": "b4da5c2d-9fc0-44e4-9dec-d22676d8877c",
00:15:31.035 "strip_size_kb": 0,
00:15:31.035 "state": "online",
00:15:31.035 "raid_level": "raid1",
00:15:31.035 "superblock": true,
00:15:31.035 "num_base_bdevs": 2,
00:15:31.035 "num_base_bdevs_discovered": 1,
00:15:31.035 "num_base_bdevs_operational": 1,
00:15:31.035 "base_bdevs_list": [
00:15:31.035 {
00:15:31.035 "name": null,
00:15:31.035 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:31.035 "is_configured": false,
00:15:31.035 "data_offset": 256,
00:15:31.035 "data_size": 7936
00:15:31.035 },
00:15:31.035 {
00:15:31.035 "name": "pt2",
00:15:31.035 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:31.035 "is_configured": true,
00:15:31.035 "data_offset": 256,
00:15:31.035 "data_size": 7936
00:15:31.035 }
00:15:31.035 ]
00:15:31.035 }'
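The `verify_raid_bdev_state` checks above read the raid bdev back with `rpc_cmd bdev_raid_get_bdevs all` and pick out the entry of interest with jq. A minimal standalone sketch of that selection, run against a trimmed copy of the JSON captured in this log (jq is assumed to be installed; the SPDK RPC plumbing is omitted):

```shell
#!/usr/bin/env bash
# Trimmed sample of the JSON that `rpc_cmd bdev_raid_get_bdevs all`
# returned in this log (only fields used below are kept).
bdevs_json='[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1
  }
]'

# Same filter the test runs at bdev_raid.sh@113: select one bdev by name.
info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$bdevs_json")

# verify_raid_bdev_state-style field checks on the selected object.
state=$(jq -r '.state' <<< "$info")
level=$(jq -r '.raid_level' <<< "$info")
echo "state=$state level=$level"
```

Comparing these extracted fields against the expected values (`online`, `raid1`, operational count) is all the state verification in this transcript amounts to.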
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:31.035 05:01:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.606 [2024-11-21 05:01:48.165531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:31.606 [2024-11-21 05:01:48.165592] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:31.606 [2024-11-21 05:01:48.165680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:31.606 [2024-11-21 05:01:48.165751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:31.606 [2024-11-21 05:01:48.165795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.606 [2024-11-21 05:01:48.229403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:31.606 [2024-11-21 05:01:48.229482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:31.606 [2024-11-21 05:01:48.229515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:15:31.606 [2024-11-21 05:01:48.229570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:31.606 [2024-11-21 05:01:48.231958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:31.606 [2024-11-21 05:01:48.232027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:31.606 [2024-11-21 05:01:48.232115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:31.606 [2024-11-21 05:01:48.232169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:31.606 [2024-11-21 05:01:48.232278] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:15:31.606 [2024-11-21 05:01:48.232338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:31.606 [2024-11-21 05:01:48.232418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:15:31.606 [2024-11-21 05:01:48.232494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:31.606 [2024-11-21 05:01:48.232593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:15:31.606 [2024-11-21 05:01:48.232632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:15:31.606 [2024-11-21 05:01:48.232853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:15:31.606 [2024-11-21 05:01:48.233000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:15:31.606 [2024-11-21 05:01:48.233038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:15:31.606 [2024-11-21 05:01:48.233190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:31.606 pt1
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:31.606 "name": "raid_bdev1",
00:15:31.606 "uuid": "b4da5c2d-9fc0-44e4-9dec-d22676d8877c",
00:15:31.606 "strip_size_kb": 0,
00:15:31.606 "state": "online",
00:15:31.606 "raid_level": "raid1",
00:15:31.606 "superblock": true,
00:15:31.606 "num_base_bdevs": 2,
00:15:31.606 "num_base_bdevs_discovered": 1,
00:15:31.606 "num_base_bdevs_operational": 1,
00:15:31.606 "base_bdevs_list": [
00:15:31.606 {
00:15:31.606 "name": null,
00:15:31.606 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:31.606 "is_configured": false,
00:15:31.606 "data_offset": 256,
00:15:31.606 "data_size": 7936
00:15:31.606 },
00:15:31.606 {
00:15:31.606 "name": "pt2",
00:15:31.606 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:31.606 "is_configured": true,
00:15:31.606 "data_offset": 256,
00:15:31.606 "data_size": 7936
00:15:31.606 }
00:15:31.606 ]
00:15:31.606 }'
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:31.606 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:32.176 [2024-11-21 05:01:48.732750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' b4da5c2d-9fc0-44e4-9dec-d22676d8877c '!=' b4da5c2d-9fc0-44e4-9dec-d22676d8877c ']'
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96723
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 96723 ']'
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 96723
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname
00:15:32.176 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:32.177 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96723
00:15:32.177 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:32.177 killing process with pid 96723
00:15:32.177 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:32.177 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96723'
00:15:32.177 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 96723
00:15:32.177 [2024-11-21 05:01:48.812238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:32.177 [2024-11-21 05:01:48.812308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:32.177 [2024-11-21 05:01:48.812358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:32.177 [2024-11-21 05:01:48.812366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:15:32.177 05:01:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 96723
00:15:32.177 [2024-11-21 05:01:48.854832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:32.746 ************************************
00:15:32.746 END TEST raid_superblock_test_4k
00:15:32.746 ************************************
00:15:32.746 05:01:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0
00:15:32.746
00:15:32.746 real 0m5.099s
00:15:32.746 user 0m8.208s
00:15:32.746 sys 0m1.116s
00:15:32.746 05:01:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:32.746 05:01:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:32.746 05:01:49 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']'
00:15:32.746 05:01:49 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true
00:15:32.746 05:01:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:15:32.746 05:01:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:32.746 05:01:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:32.746 ************************************
00:15:32.746 START TEST raid_rebuild_test_sb_4k
00:15:32.746 ************************************
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=97040
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 97040
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 97040 ']'
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:32.746 05:01:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:33.005 [2024-11-21 05:01:49.362011] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:15:33.005 I/O size of 3145728 is greater than zero copy threshold (65536).
00:15:33.005 Zero copy mechanism will not be used.
00:15:33.005 [2024-11-21 05:01:49.362253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97040 ]
00:15:33.005 [2024-11-21 05:01:49.534006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:33.005 [2024-11-21 05:01:49.559659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:33.005 [2024-11-21 05:01:49.602008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:33.005 [2024-11-21 05:01:49.602045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:33.575 BaseBdev1_malloc
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:33.575 [2024-11-21 05:01:50.200191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:33.575 [2024-11-21 05:01:50.200260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:33.575 [2024-11-21 05:01:50.200286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:33.575 [2024-11-21 05:01:50.200297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:33.575 [2024-11-21 05:01:50.202466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:33.575 [2024-11-21 05:01:50.202502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:33.575 BaseBdev1
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:33.575 BaseBdev2_malloc
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:33.575 [2024-11-21 05:01:50.228884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:15:33.575 [2024-11-21 05:01:50.228952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:33.575 [2024-11-21 05:01:50.228974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:33.575 [2024-11-21 05:01:50.228983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:33.575 [2024-11-21 05:01:50.230988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:33.575 [2024-11-21 05:01:50.231020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:15:33.575 BaseBdev2
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc
00:15:33.575 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:33.576 spare_malloc
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:33.576 spare_delay
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:33.576 [2024-11-21 05:01:50.269580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:33.576 [2024-11-21 05:01:50.269634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:33.576 [2024-11-21 05:01:50.269670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:15:33.576 [2024-11-21 05:01:50.269679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:33.576 [2024-11-21 05:01:50.271659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:33.576 [2024-11-21 05:01:50.271783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:33.576 spare
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:33.576 [2024-11-21 05:01:50.281591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:33.576 [2024-11-21 05:01:50.283426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:33.576 [2024-11-21 05:01:50.283604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:15:33.576 [2024-11-21 05:01:50.283618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:15:33.576 [2024-11-21 05:01:50.283876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:15:33.576 [2024-11-21 05:01:50.284001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:15:33.576 [2024-11-21 05:01:50.284012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:15:33.576 [2024-11-21 05:01:50.284141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.576 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:33.835 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.835 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:33.835 "name": "raid_bdev1",
00:15:33.835 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd",
00:15:33.835 "strip_size_kb": 0,
00:15:33.835 "state": "online",
00:15:33.835 "raid_level": "raid1",
00:15:33.835 "superblock": true,
00:15:33.835 "num_base_bdevs": 2,
00:15:33.835 "num_base_bdevs_discovered": 2,
00:15:33.835 "num_base_bdevs_operational": 2,
00:15:33.835 "base_bdevs_list": [
00:15:33.835 {
00:15:33.835 "name": "BaseBdev1",
00:15:33.835 "uuid": "0adf85a8-74b5-582b-9df9-ade0f4eb8206",
00:15:33.835 "is_configured": true,
00:15:33.835 "data_offset": 256,
00:15:33.835 "data_size": 7936
00:15:33.835 },
00:15:33.835 {
00:15:33.835 "name": "BaseBdev2",
00:15:33.835 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8",
00:15:33.835 "is_configured": true,
00:15:33.835 "data_offset": 256,
00:15:33.835 "data_size": 7936
00:15:33.835 }
00:15:33.835 ]
00:15:33.835 }'
00:15:33.835 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:33.835 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:15:34.095 [2024-11-21 05:01:50.753015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:34.095 05:01:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:15:34.355 [2024-11-21 05:01:51.004362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:15:34.355 /dev/nbd0
00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i
00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break
00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.355 1+0 records in 00:15:34.355 1+0 records out 00:15:34.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422566 s, 9.7 MB/s 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:34.355 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:35.295 7936+0 records in 00:15:35.295 7936+0 records out 00:15:35.295 32505856 bytes (33 MB, 31 MiB) copied, 0.628486 s, 51.7 MB/s 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:35.295 [2024-11-21 05:01:51.924371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.295 [2024-11-21 05:01:51.940477] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.295 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.296 "name": 
"raid_bdev1", 00:15:35.296 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:35.296 "strip_size_kb": 0, 00:15:35.296 "state": "online", 00:15:35.296 "raid_level": "raid1", 00:15:35.296 "superblock": true, 00:15:35.296 "num_base_bdevs": 2, 00:15:35.296 "num_base_bdevs_discovered": 1, 00:15:35.296 "num_base_bdevs_operational": 1, 00:15:35.296 "base_bdevs_list": [ 00:15:35.296 { 00:15:35.296 "name": null, 00:15:35.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.296 "is_configured": false, 00:15:35.296 "data_offset": 0, 00:15:35.296 "data_size": 7936 00:15:35.296 }, 00:15:35.296 { 00:15:35.296 "name": "BaseBdev2", 00:15:35.296 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:35.296 "is_configured": true, 00:15:35.296 "data_offset": 256, 00:15:35.296 "data_size": 7936 00:15:35.296 } 00:15:35.296 ] 00:15:35.296 }' 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.296 05:01:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.865 05:01:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.865 05:01:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.865 05:01:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.865 [2024-11-21 05:01:52.387665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.865 [2024-11-21 05:01:52.421012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:15:35.865 05:01:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.865 05:01:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:35.865 [2024-11-21 05:01:52.428658] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.801 05:01:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.801 "name": "raid_bdev1", 00:15:36.801 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:36.801 "strip_size_kb": 0, 00:15:36.801 "state": "online", 00:15:36.801 "raid_level": "raid1", 00:15:36.801 "superblock": true, 00:15:36.801 "num_base_bdevs": 2, 00:15:36.801 "num_base_bdevs_discovered": 2, 00:15:36.801 "num_base_bdevs_operational": 2, 00:15:36.801 "process": { 00:15:36.801 "type": "rebuild", 00:15:36.801 "target": "spare", 00:15:36.801 "progress": { 00:15:36.801 "blocks": 2560, 00:15:36.801 "percent": 32 00:15:36.801 } 00:15:36.801 }, 00:15:36.801 "base_bdevs_list": [ 00:15:36.801 { 00:15:36.801 "name": "spare", 00:15:36.801 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:36.801 "is_configured": true, 00:15:36.801 "data_offset": 256, 
00:15:36.801 "data_size": 7936 00:15:36.801 }, 00:15:36.801 { 00:15:36.801 "name": "BaseBdev2", 00:15:36.801 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:36.801 "is_configured": true, 00:15:36.801 "data_offset": 256, 00:15:36.801 "data_size": 7936 00:15:36.801 } 00:15:36.801 ] 00:15:36.801 }' 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.801 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.060 [2024-11-21 05:01:53.576429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.060 [2024-11-21 05:01:53.637976] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.060 [2024-11-21 05:01:53.638057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.060 [2024-11-21 05:01:53.638082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.060 [2024-11-21 05:01:53.638108] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.060 
05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.060 "name": "raid_bdev1", 00:15:37.060 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:37.060 "strip_size_kb": 0, 00:15:37.060 "state": "online", 00:15:37.060 "raid_level": "raid1", 00:15:37.060 "superblock": true, 00:15:37.060 "num_base_bdevs": 2, 00:15:37.060 "num_base_bdevs_discovered": 1, 00:15:37.060 
"num_base_bdevs_operational": 1, 00:15:37.060 "base_bdevs_list": [ 00:15:37.060 { 00:15:37.060 "name": null, 00:15:37.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.060 "is_configured": false, 00:15:37.060 "data_offset": 0, 00:15:37.060 "data_size": 7936 00:15:37.060 }, 00:15:37.060 { 00:15:37.060 "name": "BaseBdev2", 00:15:37.060 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:37.060 "is_configured": true, 00:15:37.060 "data_offset": 256, 00:15:37.060 "data_size": 7936 00:15:37.060 } 00:15:37.060 ] 00:15:37.060 }' 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.060 05:01:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.628 
"name": "raid_bdev1", 00:15:37.628 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:37.628 "strip_size_kb": 0, 00:15:37.628 "state": "online", 00:15:37.628 "raid_level": "raid1", 00:15:37.628 "superblock": true, 00:15:37.628 "num_base_bdevs": 2, 00:15:37.628 "num_base_bdevs_discovered": 1, 00:15:37.628 "num_base_bdevs_operational": 1, 00:15:37.628 "base_bdevs_list": [ 00:15:37.628 { 00:15:37.628 "name": null, 00:15:37.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.628 "is_configured": false, 00:15:37.628 "data_offset": 0, 00:15:37.628 "data_size": 7936 00:15:37.628 }, 00:15:37.628 { 00:15:37.628 "name": "BaseBdev2", 00:15:37.628 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:37.628 "is_configured": true, 00:15:37.628 "data_offset": 256, 00:15:37.628 "data_size": 7936 00:15:37.628 } 00:15:37.628 ] 00:15:37.628 }' 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.628 [2024-11-21 05:01:54.284991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.628 [2024-11-21 05:01:54.292365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:37.628 05:01:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:37.628 [2024-11-21 05:01:54.294585] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.007 "name": "raid_bdev1", 00:15:39.007 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:39.007 "strip_size_kb": 0, 00:15:39.007 "state": "online", 00:15:39.007 "raid_level": "raid1", 00:15:39.007 "superblock": true, 00:15:39.007 "num_base_bdevs": 2, 00:15:39.007 "num_base_bdevs_discovered": 2, 00:15:39.007 "num_base_bdevs_operational": 2, 00:15:39.007 "process": { 00:15:39.007 "type": "rebuild", 00:15:39.007 "target": "spare", 00:15:39.007 "progress": { 00:15:39.007 "blocks": 2560, 00:15:39.007 
"percent": 32 00:15:39.007 } 00:15:39.007 }, 00:15:39.007 "base_bdevs_list": [ 00:15:39.007 { 00:15:39.007 "name": "spare", 00:15:39.007 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:39.007 "is_configured": true, 00:15:39.007 "data_offset": 256, 00:15:39.007 "data_size": 7936 00:15:39.007 }, 00:15:39.007 { 00:15:39.007 "name": "BaseBdev2", 00:15:39.007 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:39.007 "is_configured": true, 00:15:39.007 "data_offset": 256, 00:15:39.007 "data_size": 7936 00:15:39.007 } 00:15:39.007 ] 00:15:39.007 }' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:39.007 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=567 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.007 "name": "raid_bdev1", 00:15:39.007 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:39.007 "strip_size_kb": 0, 00:15:39.007 "state": "online", 00:15:39.007 "raid_level": "raid1", 00:15:39.007 "superblock": true, 00:15:39.007 "num_base_bdevs": 2, 00:15:39.007 "num_base_bdevs_discovered": 2, 00:15:39.007 "num_base_bdevs_operational": 2, 00:15:39.007 "process": { 00:15:39.007 "type": "rebuild", 00:15:39.007 "target": "spare", 00:15:39.007 "progress": { 00:15:39.007 "blocks": 2816, 00:15:39.007 "percent": 35 00:15:39.007 } 00:15:39.007 }, 00:15:39.007 "base_bdevs_list": [ 00:15:39.007 { 00:15:39.007 "name": "spare", 00:15:39.007 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:39.007 "is_configured": true, 00:15:39.007 "data_offset": 256, 00:15:39.007 "data_size": 7936 00:15:39.007 }, 00:15:39.007 { 00:15:39.007 "name": "BaseBdev2", 
00:15:39.007 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:39.007 "is_configured": true, 00:15:39.007 "data_offset": 256, 00:15:39.007 "data_size": 7936 00:15:39.007 } 00:15:39.007 ] 00:15:39.007 }' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.007 05:01:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.947 "name": "raid_bdev1", 00:15:39.947 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:39.947 "strip_size_kb": 0, 00:15:39.947 "state": "online", 00:15:39.947 "raid_level": "raid1", 00:15:39.947 "superblock": true, 00:15:39.947 "num_base_bdevs": 2, 00:15:39.947 "num_base_bdevs_discovered": 2, 00:15:39.947 "num_base_bdevs_operational": 2, 00:15:39.947 "process": { 00:15:39.947 "type": "rebuild", 00:15:39.947 "target": "spare", 00:15:39.947 "progress": { 00:15:39.947 "blocks": 5632, 00:15:39.947 "percent": 70 00:15:39.947 } 00:15:39.947 }, 00:15:39.947 "base_bdevs_list": [ 00:15:39.947 { 00:15:39.947 "name": "spare", 00:15:39.947 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:39.947 "is_configured": true, 00:15:39.947 "data_offset": 256, 00:15:39.947 "data_size": 7936 00:15:39.947 }, 00:15:39.947 { 00:15:39.947 "name": "BaseBdev2", 00:15:39.947 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:39.947 "is_configured": true, 00:15:39.947 "data_offset": 256, 00:15:39.947 "data_size": 7936 00:15:39.947 } 00:15:39.947 ] 00:15:39.947 }' 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.947 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.207 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.207 05:01:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.777 [2024-11-21 05:01:57.414740] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:40.777 [2024-11-21 05:01:57.414894] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:40.777 [2024-11-21 05:01:57.415057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.037 "name": "raid_bdev1", 00:15:41.037 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:41.037 "strip_size_kb": 0, 00:15:41.037 "state": "online", 00:15:41.037 "raid_level": "raid1", 00:15:41.037 "superblock": true, 00:15:41.037 "num_base_bdevs": 2, 00:15:41.037 "num_base_bdevs_discovered": 2, 00:15:41.037 "num_base_bdevs_operational": 2, 00:15:41.037 "base_bdevs_list": [ 00:15:41.037 { 00:15:41.037 "name": 
"spare", 00:15:41.037 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:41.037 "is_configured": true, 00:15:41.037 "data_offset": 256, 00:15:41.037 "data_size": 7936 00:15:41.037 }, 00:15:41.037 { 00:15:41.037 "name": "BaseBdev2", 00:15:41.037 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:41.037 "is_configured": true, 00:15:41.037 "data_offset": 256, 00:15:41.037 "data_size": 7936 00:15:41.037 } 00:15:41.037 ] 00:15:41.037 }' 00:15:41.037 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.297 "name": "raid_bdev1", 00:15:41.297 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:41.297 "strip_size_kb": 0, 00:15:41.297 "state": "online", 00:15:41.297 "raid_level": "raid1", 00:15:41.297 "superblock": true, 00:15:41.297 "num_base_bdevs": 2, 00:15:41.297 "num_base_bdevs_discovered": 2, 00:15:41.297 "num_base_bdevs_operational": 2, 00:15:41.297 "base_bdevs_list": [ 00:15:41.297 { 00:15:41.297 "name": "spare", 00:15:41.297 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:41.297 "is_configured": true, 00:15:41.297 "data_offset": 256, 00:15:41.297 "data_size": 7936 00:15:41.297 }, 00:15:41.297 { 00:15:41.297 "name": "BaseBdev2", 00:15:41.297 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:41.297 "is_configured": true, 00:15:41.297 "data_offset": 256, 00:15:41.297 "data_size": 7936 00:15:41.297 } 00:15:41.297 ] 00:15:41.297 }' 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.297 05:01:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.297 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.557 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.557 "name": "raid_bdev1", 00:15:41.557 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:41.557 "strip_size_kb": 0, 00:15:41.557 "state": "online", 00:15:41.557 "raid_level": "raid1", 00:15:41.557 "superblock": true, 00:15:41.557 "num_base_bdevs": 2, 00:15:41.557 "num_base_bdevs_discovered": 2, 00:15:41.557 "num_base_bdevs_operational": 2, 00:15:41.557 "base_bdevs_list": [ 00:15:41.557 { 00:15:41.557 "name": "spare", 00:15:41.557 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:41.557 "is_configured": true, 00:15:41.557 "data_offset": 256, 00:15:41.557 "data_size": 7936 00:15:41.557 }, 00:15:41.557 
{ 00:15:41.557 "name": "BaseBdev2", 00:15:41.557 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:41.557 "is_configured": true, 00:15:41.557 "data_offset": 256, 00:15:41.557 "data_size": 7936 00:15:41.557 } 00:15:41.557 ] 00:15:41.557 }' 00:15:41.557 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.557 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.817 [2024-11-21 05:01:58.468869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.817 [2024-11-21 05:01:58.468947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.817 [2024-11-21 05:01:58.469122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.817 [2024-11-21 05:01:58.469258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.817 [2024-11-21 05:01:58.469313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.817 
05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:41.817 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:42.077 /dev/nbd0 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:42.077 05:01:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.077 1+0 records in 00:15:42.077 1+0 records out 00:15:42.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390794 s, 10.5 MB/s 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:42.077 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:42.336 /dev/nbd1 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.336 1+0 records in 00:15:42.336 1+0 records out 00:15:42.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397185 s, 10.3 MB/s 00:15:42.336 05:01:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.336 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:42.336 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.336 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:15:42.336 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:42.336 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.336 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:42.336 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.594 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.855 [2024-11-21 05:01:59.519084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:42.855 [2024-11-21 05:01:59.519155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.855 [2024-11-21 05:01:59.519174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:42.855 [2024-11-21 05:01:59.519186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.855 [2024-11-21 05:01:59.521327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.855 [2024-11-21 05:01:59.521369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:42.855 [2024-11-21 05:01:59.521446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:42.855 [2024-11-21 05:01:59.521496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.855 [2024-11-21 05:01:59.521610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.855 spare 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.855 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.125 [2024-11-21 05:01:59.621505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:43.125 [2024-11-21 05:01:59.621530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:43.125 [2024-11-21 05:01:59.621791] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:15:43.125 [2024-11-21 05:01:59.621923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:43.125 [2024-11-21 05:01:59.621935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:43.125 [2024-11-21 05:01:59.622075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.125 05:01:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.125 "name": "raid_bdev1", 00:15:43.125 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:43.125 "strip_size_kb": 0, 00:15:43.125 "state": "online", 00:15:43.125 "raid_level": "raid1", 00:15:43.125 "superblock": true, 00:15:43.125 "num_base_bdevs": 2, 00:15:43.125 "num_base_bdevs_discovered": 2, 00:15:43.125 "num_base_bdevs_operational": 2, 00:15:43.125 "base_bdevs_list": [ 00:15:43.125 { 00:15:43.125 "name": "spare", 00:15:43.125 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:43.125 "is_configured": true, 00:15:43.125 "data_offset": 256, 00:15:43.125 "data_size": 7936 00:15:43.125 }, 00:15:43.125 { 00:15:43.125 "name": "BaseBdev2", 00:15:43.125 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:43.125 "is_configured": true, 00:15:43.125 "data_offset": 256, 00:15:43.125 "data_size": 7936 00:15:43.125 } 00:15:43.125 ] 00:15:43.125 }' 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.125 05:01:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.401 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.401 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.401 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.401 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.401 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.401 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.401 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.401 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.401 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.401 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.660 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.660 "name": "raid_bdev1", 00:15:43.660 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:43.660 "strip_size_kb": 0, 00:15:43.660 "state": "online", 00:15:43.660 "raid_level": "raid1", 00:15:43.660 "superblock": true, 00:15:43.660 "num_base_bdevs": 2, 00:15:43.661 "num_base_bdevs_discovered": 2, 00:15:43.661 "num_base_bdevs_operational": 2, 00:15:43.661 "base_bdevs_list": [ 00:15:43.661 { 00:15:43.661 "name": "spare", 00:15:43.661 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:43.661 "is_configured": true, 00:15:43.661 "data_offset": 256, 00:15:43.661 "data_size": 7936 00:15:43.661 }, 00:15:43.661 { 00:15:43.661 "name": "BaseBdev2", 00:15:43.661 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:43.661 "is_configured": true, 00:15:43.661 "data_offset": 256, 00:15:43.661 "data_size": 7936 00:15:43.661 } 00:15:43.661 ] 00:15:43.661 }' 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.661 [2024-11-21 05:02:00.293784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:43.661 05:02:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.661 "name": "raid_bdev1", 00:15:43.661 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:43.661 "strip_size_kb": 0, 00:15:43.661 "state": "online", 00:15:43.661 "raid_level": "raid1", 00:15:43.661 "superblock": true, 00:15:43.661 "num_base_bdevs": 2, 00:15:43.661 "num_base_bdevs_discovered": 1, 00:15:43.661 "num_base_bdevs_operational": 1, 00:15:43.661 "base_bdevs_list": [ 00:15:43.661 { 00:15:43.661 "name": null, 00:15:43.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.661 "is_configured": false, 00:15:43.661 "data_offset": 0, 00:15:43.661 "data_size": 7936 00:15:43.661 }, 00:15:43.661 { 00:15:43.661 "name": "BaseBdev2", 00:15:43.661 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:43.661 "is_configured": true, 00:15:43.661 "data_offset": 256, 00:15:43.661 "data_size": 7936 00:15:43.661 } 00:15:43.661 ] 00:15:43.661 }' 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.661 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.231 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:44.231 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.231 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.231 [2024-11-21 05:02:00.761011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.231 [2024-11-21 05:02:00.761271] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:44.231 [2024-11-21 05:02:00.761333] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:44.231 [2024-11-21 05:02:00.761405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.231 [2024-11-21 05:02:00.766296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:15:44.231 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.231 05:02:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:44.231 [2024-11-21 05:02:00.768264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.170 
05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.170 "name": "raid_bdev1", 00:15:45.170 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:45.170 "strip_size_kb": 0, 00:15:45.170 "state": "online", 00:15:45.170 "raid_level": "raid1", 00:15:45.170 "superblock": true, 00:15:45.170 "num_base_bdevs": 2, 00:15:45.170 "num_base_bdevs_discovered": 2, 00:15:45.170 "num_base_bdevs_operational": 2, 00:15:45.170 "process": { 00:15:45.170 "type": "rebuild", 00:15:45.170 "target": "spare", 00:15:45.170 "progress": { 00:15:45.170 "blocks": 2560, 00:15:45.170 "percent": 32 00:15:45.170 } 00:15:45.170 }, 00:15:45.170 "base_bdevs_list": [ 00:15:45.170 { 00:15:45.170 "name": "spare", 00:15:45.170 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:45.170 "is_configured": true, 00:15:45.170 "data_offset": 256, 00:15:45.170 "data_size": 7936 00:15:45.170 }, 00:15:45.170 { 00:15:45.170 "name": "BaseBdev2", 00:15:45.170 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:45.170 "is_configured": true, 00:15:45.170 "data_offset": 256, 00:15:45.170 "data_size": 7936 00:15:45.170 } 00:15:45.170 ] 00:15:45.170 }' 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.170 05:02:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.170 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.430 [2024-11-21 05:02:01.932759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.430 [2024-11-21 05:02:01.973144] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:45.430 [2024-11-21 05:02:01.973195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.430 [2024-11-21 05:02:01.973211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.430 [2024-11-21 05:02:01.973218] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.430 05:02:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.430 05:02:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.430 05:02:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.430 05:02:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.430 "name": "raid_bdev1", 00:15:45.430 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:45.430 "strip_size_kb": 0, 00:15:45.430 "state": "online", 00:15:45.430 "raid_level": "raid1", 00:15:45.430 "superblock": true, 00:15:45.430 "num_base_bdevs": 2, 00:15:45.430 "num_base_bdevs_discovered": 1, 00:15:45.430 "num_base_bdevs_operational": 1, 00:15:45.430 "base_bdevs_list": [ 00:15:45.430 { 00:15:45.430 "name": null, 00:15:45.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.430 "is_configured": false, 00:15:45.430 "data_offset": 0, 00:15:45.430 "data_size": 7936 00:15:45.430 }, 00:15:45.430 { 00:15:45.430 "name": "BaseBdev2", 00:15:45.430 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:45.430 "is_configured": true, 00:15:45.430 "data_offset": 256, 00:15:45.430 
"data_size": 7936 00:15:45.430 } 00:15:45.430 ] 00:15:45.430 }' 00:15:45.430 05:02:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.430 05:02:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.000 05:02:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:46.000 05:02:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.000 05:02:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.000 [2024-11-21 05:02:02.444896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:46.000 [2024-11-21 05:02:02.444992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.000 [2024-11-21 05:02:02.445031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:46.000 [2024-11-21 05:02:02.445058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.000 [2024-11-21 05:02:02.445550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.000 [2024-11-21 05:02:02.445614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:46.000 [2024-11-21 05:02:02.445744] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:46.000 [2024-11-21 05:02:02.445785] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.000 [2024-11-21 05:02:02.445849] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:46.000 [2024-11-21 05:02:02.445916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.000 [2024-11-21 05:02:02.450716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:15:46.000 spare 00:15:46.000 05:02:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.000 05:02:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:46.000 [2024-11-21 05:02:02.452632] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.939 "name": "raid_bdev1", 00:15:46.939 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:46.939 "strip_size_kb": 0, 00:15:46.939 
"state": "online", 00:15:46.939 "raid_level": "raid1", 00:15:46.939 "superblock": true, 00:15:46.939 "num_base_bdevs": 2, 00:15:46.939 "num_base_bdevs_discovered": 2, 00:15:46.939 "num_base_bdevs_operational": 2, 00:15:46.939 "process": { 00:15:46.939 "type": "rebuild", 00:15:46.939 "target": "spare", 00:15:46.939 "progress": { 00:15:46.939 "blocks": 2560, 00:15:46.939 "percent": 32 00:15:46.939 } 00:15:46.939 }, 00:15:46.939 "base_bdevs_list": [ 00:15:46.939 { 00:15:46.939 "name": "spare", 00:15:46.939 "uuid": "fdb71899-4e48-57dc-ae5f-5ee48f4cc8ca", 00:15:46.939 "is_configured": true, 00:15:46.939 "data_offset": 256, 00:15:46.939 "data_size": 7936 00:15:46.939 }, 00:15:46.939 { 00:15:46.939 "name": "BaseBdev2", 00:15:46.939 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:46.939 "is_configured": true, 00:15:46.939 "data_offset": 256, 00:15:46.939 "data_size": 7936 00:15:46.939 } 00:15:46.939 ] 00:15:46.939 }' 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.939 [2024-11-21 05:02:03.588918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.939 [2024-11-21 05:02:03.656739] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:46.939 [2024-11-21 05:02:03.656852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.939 [2024-11-21 05:02:03.656868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.939 [2024-11-21 05:02:03.656877] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.939 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.200 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.200 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.200 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.200 05:02:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.200 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.200 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.200 "name": "raid_bdev1", 00:15:47.200 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:47.200 "strip_size_kb": 0, 00:15:47.200 "state": "online", 00:15:47.200 "raid_level": "raid1", 00:15:47.200 "superblock": true, 00:15:47.200 "num_base_bdevs": 2, 00:15:47.200 "num_base_bdevs_discovered": 1, 00:15:47.200 "num_base_bdevs_operational": 1, 00:15:47.200 "base_bdevs_list": [ 00:15:47.200 { 00:15:47.200 "name": null, 00:15:47.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.200 "is_configured": false, 00:15:47.200 "data_offset": 0, 00:15:47.200 "data_size": 7936 00:15:47.200 }, 00:15:47.200 { 00:15:47.200 "name": "BaseBdev2", 00:15:47.200 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:47.200 "is_configured": true, 00:15:47.200 "data_offset": 256, 00:15:47.200 "data_size": 7936 00:15:47.200 } 00:15:47.200 ] 00:15:47.200 }' 00:15:47.200 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.200 05:02:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.462 "name": "raid_bdev1", 00:15:47.462 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:47.462 "strip_size_kb": 0, 00:15:47.462 "state": "online", 00:15:47.462 "raid_level": "raid1", 00:15:47.462 "superblock": true, 00:15:47.462 "num_base_bdevs": 2, 00:15:47.462 "num_base_bdevs_discovered": 1, 00:15:47.462 "num_base_bdevs_operational": 1, 00:15:47.462 "base_bdevs_list": [ 00:15:47.462 { 00:15:47.462 "name": null, 00:15:47.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.462 "is_configured": false, 00:15:47.462 "data_offset": 0, 00:15:47.462 "data_size": 7936 00:15:47.462 }, 00:15:47.462 { 00:15:47.462 "name": "BaseBdev2", 00:15:47.462 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:47.462 "is_configured": true, 00:15:47.462 "data_offset": 256, 00:15:47.462 "data_size": 7936 00:15:47.462 } 00:15:47.462 ] 00:15:47.462 }' 00:15:47.462 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.724 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:47.724 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.724 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.724 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:47.724 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.724 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.724 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.724 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:47.725 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.725 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.725 [2024-11-21 05:02:04.272507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:47.725 [2024-11-21 05:02:04.272603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.725 [2024-11-21 05:02:04.272640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:47.725 [2024-11-21 05:02:04.272669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.725 [2024-11-21 05:02:04.273123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.725 [2024-11-21 05:02:04.273188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:47.725 [2024-11-21 05:02:04.273299] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:47.725 [2024-11-21 05:02:04.273349] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:47.725 [2024-11-21 05:02:04.273412] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:47.725 [2024-11-21 05:02:04.273451] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:47.725 BaseBdev1 00:15:47.725 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.725 05:02:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.665 "name": "raid_bdev1", 00:15:48.665 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:48.665 "strip_size_kb": 0, 00:15:48.665 "state": "online", 00:15:48.665 "raid_level": "raid1", 00:15:48.665 "superblock": true, 00:15:48.665 "num_base_bdevs": 2, 00:15:48.665 "num_base_bdevs_discovered": 1, 00:15:48.665 "num_base_bdevs_operational": 1, 00:15:48.665 "base_bdevs_list": [ 00:15:48.665 { 00:15:48.665 "name": null, 00:15:48.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.665 "is_configured": false, 00:15:48.665 "data_offset": 0, 00:15:48.665 "data_size": 7936 00:15:48.665 }, 00:15:48.665 { 00:15:48.665 "name": "BaseBdev2", 00:15:48.665 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:48.665 "is_configured": true, 00:15:48.665 "data_offset": 256, 00:15:48.665 "data_size": 7936 00:15:48.665 } 00:15:48.665 ] 00:15:48.665 }' 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.665 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.236 "name": "raid_bdev1", 00:15:49.236 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:49.236 "strip_size_kb": 0, 00:15:49.236 "state": "online", 00:15:49.236 "raid_level": "raid1", 00:15:49.236 "superblock": true, 00:15:49.236 "num_base_bdevs": 2, 00:15:49.236 "num_base_bdevs_discovered": 1, 00:15:49.236 "num_base_bdevs_operational": 1, 00:15:49.236 "base_bdevs_list": [ 00:15:49.236 { 00:15:49.236 "name": null, 00:15:49.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.236 "is_configured": false, 00:15:49.236 "data_offset": 0, 00:15:49.236 "data_size": 7936 00:15:49.236 }, 00:15:49.236 { 00:15:49.236 "name": "BaseBdev2", 00:15:49.236 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:49.236 "is_configured": true, 00:15:49.236 "data_offset": 256, 00:15:49.236 "data_size": 7936 00:15:49.236 } 00:15:49.236 ] 00:15:49.236 }' 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.236 [2024-11-21 05:02:05.869823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.236 [2024-11-21 05:02:05.870022] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:49.236 [2024-11-21 05:02:05.870077] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:49.236 request: 00:15:49.236 { 00:15:49.236 "base_bdev": "BaseBdev1", 00:15:49.236 "raid_bdev": "raid_bdev1", 00:15:49.236 "method": "bdev_raid_add_base_bdev", 00:15:49.236 "req_id": 1 00:15:49.236 } 00:15:49.236 Got JSON-RPC error response 00:15:49.236 response: 00:15:49.236 { 00:15:49.236 "code": -22, 00:15:49.236 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:49.236 } 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:49.236 05:02:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:50.175 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.435 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.435 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.435 "name": "raid_bdev1", 00:15:50.435 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:50.435 "strip_size_kb": 0, 00:15:50.435 "state": "online", 00:15:50.435 "raid_level": "raid1", 00:15:50.435 "superblock": true, 00:15:50.435 "num_base_bdevs": 2, 00:15:50.435 "num_base_bdevs_discovered": 1, 00:15:50.435 "num_base_bdevs_operational": 1, 00:15:50.435 "base_bdevs_list": [ 00:15:50.435 { 00:15:50.435 "name": null, 00:15:50.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.435 "is_configured": false, 00:15:50.435 "data_offset": 0, 00:15:50.435 "data_size": 7936 00:15:50.435 }, 00:15:50.435 { 00:15:50.435 "name": "BaseBdev2", 00:15:50.435 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:50.435 "is_configured": true, 00:15:50.435 "data_offset": 256, 00:15:50.435 "data_size": 7936 00:15:50.435 } 00:15:50.435 ] 00:15:50.435 }' 00:15:50.435 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.435 05:02:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.696 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.696 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.696 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.696 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.696 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.696 05:02:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.696 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.696 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.696 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.696 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.696 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.696 "name": "raid_bdev1", 00:15:50.696 "uuid": "28944e92-2b06-48de-a161-87c8a45b6bdd", 00:15:50.696 "strip_size_kb": 0, 00:15:50.696 "state": "online", 00:15:50.696 "raid_level": "raid1", 00:15:50.696 "superblock": true, 00:15:50.696 "num_base_bdevs": 2, 00:15:50.696 "num_base_bdevs_discovered": 1, 00:15:50.696 "num_base_bdevs_operational": 1, 00:15:50.696 "base_bdevs_list": [ 00:15:50.696 { 00:15:50.696 "name": null, 00:15:50.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.696 "is_configured": false, 00:15:50.696 "data_offset": 0, 00:15:50.696 "data_size": 7936 00:15:50.696 }, 00:15:50.696 { 00:15:50.696 "name": "BaseBdev2", 00:15:50.696 "uuid": "a9d367fe-fda3-5074-ab40-4ebe60bd13c8", 00:15:50.696 "is_configured": true, 00:15:50.696 "data_offset": 256, 00:15:50.696 "data_size": 7936 00:15:50.696 } 00:15:50.696 ] 00:15:50.696 }' 00:15:50.697 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.957 05:02:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 97040 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 97040 ']' 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 97040 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97040 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97040' 00:15:50.957 killing process with pid 97040 00:15:50.957 Received shutdown signal, test time was about 60.000000 seconds 00:15:50.957 00:15:50.957 Latency(us) 00:15:50.957 [2024-11-21T05:02:07.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.957 [2024-11-21T05:02:07.692Z] =================================================================================================================== 00:15:50.957 [2024-11-21T05:02:07.692Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 97040 00:15:50.957 [2024-11-21 05:02:07.535244] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.957 [2024-11-21 05:02:07.535358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.957 [2024-11-21 05:02:07.535412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:15:50.957 [2024-11-21 05:02:07.535421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:50.957 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 97040 00:15:50.957 [2024-11-21 05:02:07.568085] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:51.218 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:51.218 00:15:51.218 real 0m18.507s 00:15:51.218 user 0m24.611s 00:15:51.218 sys 0m2.659s 00:15:51.218 ************************************ 00:15:51.218 END TEST raid_rebuild_test_sb_4k 00:15:51.218 ************************************ 00:15:51.218 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.218 05:02:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.218 05:02:07 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:51.218 05:02:07 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:51.218 05:02:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:51.218 05:02:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.218 05:02:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:51.218 ************************************ 00:15:51.218 START TEST raid_state_function_test_sb_md_separate 00:15:51.218 ************************************ 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:51.218 05:02:07 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:51.218 05:02:07 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97714 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97714' 00:15:51.218 Process raid pid: 97714 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97714 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 97714 ']' 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.218 05:02:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.218 [2024-11-21 05:02:07.948507] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:15:51.218 [2024-11-21 05:02:07.948648] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.478 [2024-11-21 05:02:08.118192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.478 [2024-11-21 05:02:08.143152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.478 [2024-11-21 05:02:08.185941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.478 [2024-11-21 05:02:08.185977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.047 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.047 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:52.047 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:52.047 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.047 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.047 [2024-11-21 05:02:08.779552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.047 [2024-11-21 05:02:08.779608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:15:52.047 [2024-11-21 05:02:08.779618] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.047 [2024-11-21 05:02:08.779630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.307 "name": "Existed_Raid", 00:15:52.307 "uuid": "c1a3b6c8-14b7-402d-8172-6498345ae8d7", 00:15:52.307 "strip_size_kb": 0, 00:15:52.307 "state": "configuring", 00:15:52.307 "raid_level": "raid1", 00:15:52.307 "superblock": true, 00:15:52.307 "num_base_bdevs": 2, 00:15:52.307 "num_base_bdevs_discovered": 0, 00:15:52.307 "num_base_bdevs_operational": 2, 00:15:52.307 "base_bdevs_list": [ 00:15:52.307 { 00:15:52.307 "name": "BaseBdev1", 00:15:52.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.307 "is_configured": false, 00:15:52.307 "data_offset": 0, 00:15:52.307 "data_size": 0 00:15:52.307 }, 00:15:52.307 { 00:15:52.307 "name": "BaseBdev2", 00:15:52.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.307 "is_configured": false, 00:15:52.307 "data_offset": 0, 00:15:52.307 "data_size": 0 00:15:52.307 } 00:15:52.307 ] 00:15:52.307 }' 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.307 05:02:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.566 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:52.566 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.566 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.566 
[2024-11-21 05:02:09.234748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.566 [2024-11-21 05:02:09.234851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:52.566 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.566 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:52.566 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.566 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.566 [2024-11-21 05:02:09.246736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.566 [2024-11-21 05:02:09.246832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.566 [2024-11-21 05:02:09.246858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.566 [2024-11-21 05:02:09.246880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.567 [2024-11-21 05:02:09.268214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.567 
BaseBdev1 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.567 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.567 [ 00:15:52.567 { 00:15:52.567 "name": "BaseBdev1", 00:15:52.567 "aliases": [ 00:15:52.567 "128c9467-9167-4321-a625-db589e5f68c9" 00:15:52.567 ], 00:15:52.567 "product_name": "Malloc disk", 
00:15:52.567 "block_size": 4096, 00:15:52.567 "num_blocks": 8192, 00:15:52.567 "uuid": "128c9467-9167-4321-a625-db589e5f68c9", 00:15:52.567 "md_size": 32, 00:15:52.567 "md_interleave": false, 00:15:52.567 "dif_type": 0, 00:15:52.567 "assigned_rate_limits": { 00:15:52.567 "rw_ios_per_sec": 0, 00:15:52.567 "rw_mbytes_per_sec": 0, 00:15:52.567 "r_mbytes_per_sec": 0, 00:15:52.567 "w_mbytes_per_sec": 0 00:15:52.567 }, 00:15:52.567 "claimed": true, 00:15:52.567 "claim_type": "exclusive_write", 00:15:52.567 "zoned": false, 00:15:52.567 "supported_io_types": { 00:15:52.567 "read": true, 00:15:52.567 "write": true, 00:15:52.567 "unmap": true, 00:15:52.567 "flush": true, 00:15:52.567 "reset": true, 00:15:52.567 "nvme_admin": false, 00:15:52.567 "nvme_io": false, 00:15:52.567 "nvme_io_md": false, 00:15:52.826 "write_zeroes": true, 00:15:52.826 "zcopy": true, 00:15:52.826 "get_zone_info": false, 00:15:52.826 "zone_management": false, 00:15:52.826 "zone_append": false, 00:15:52.826 "compare": false, 00:15:52.826 "compare_and_write": false, 00:15:52.826 "abort": true, 00:15:52.826 "seek_hole": false, 00:15:52.827 "seek_data": false, 00:15:52.827 "copy": true, 00:15:52.827 "nvme_iov_md": false 00:15:52.827 }, 00:15:52.827 "memory_domains": [ 00:15:52.827 { 00:15:52.827 "dma_device_id": "system", 00:15:52.827 "dma_device_type": 1 00:15:52.827 }, 00:15:52.827 { 00:15:52.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.827 "dma_device_type": 2 00:15:52.827 } 00:15:52.827 ], 00:15:52.827 "driver_specific": {} 00:15:52.827 } 00:15:52.827 ] 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:52.827 05:02:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.827 "name": "Existed_Raid", 00:15:52.827 "uuid": "b9f8ddd2-6467-4a3d-a992-1d675a2c2f7e", 
00:15:52.827 "strip_size_kb": 0, 00:15:52.827 "state": "configuring", 00:15:52.827 "raid_level": "raid1", 00:15:52.827 "superblock": true, 00:15:52.827 "num_base_bdevs": 2, 00:15:52.827 "num_base_bdevs_discovered": 1, 00:15:52.827 "num_base_bdevs_operational": 2, 00:15:52.827 "base_bdevs_list": [ 00:15:52.827 { 00:15:52.827 "name": "BaseBdev1", 00:15:52.827 "uuid": "128c9467-9167-4321-a625-db589e5f68c9", 00:15:52.827 "is_configured": true, 00:15:52.827 "data_offset": 256, 00:15:52.827 "data_size": 7936 00:15:52.827 }, 00:15:52.827 { 00:15:52.827 "name": "BaseBdev2", 00:15:52.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.827 "is_configured": false, 00:15:52.827 "data_offset": 0, 00:15:52.827 "data_size": 0 00:15:52.827 } 00:15:52.827 ] 00:15:52.827 }' 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.827 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.086 [2024-11-21 05:02:09.703599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.086 [2024-11-21 05:02:09.703693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:53.086 05:02:09 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.086 [2024-11-21 05:02:09.715611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.086 [2024-11-21 05:02:09.717449] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:53.086 [2024-11-21 05:02:09.717521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.086 "name": "Existed_Raid", 00:15:53.086 "uuid": "53eaa5df-7636-41e8-b765-cb1f0b910ff2", 00:15:53.086 "strip_size_kb": 0, 00:15:53.086 "state": "configuring", 00:15:53.086 "raid_level": "raid1", 00:15:53.086 "superblock": true, 00:15:53.086 "num_base_bdevs": 2, 00:15:53.086 "num_base_bdevs_discovered": 1, 00:15:53.086 "num_base_bdevs_operational": 2, 00:15:53.086 "base_bdevs_list": [ 00:15:53.086 { 00:15:53.086 "name": "BaseBdev1", 00:15:53.086 "uuid": "128c9467-9167-4321-a625-db589e5f68c9", 00:15:53.086 "is_configured": true, 00:15:53.086 "data_offset": 256, 00:15:53.086 "data_size": 7936 00:15:53.086 }, 00:15:53.086 { 00:15:53.086 "name": "BaseBdev2", 00:15:53.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.086 "is_configured": false, 00:15:53.086 "data_offset": 0, 00:15:53.086 "data_size": 0 00:15:53.086 } 00:15:53.086 ] 00:15:53.086 }' 00:15:53.086 05:02:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.086 05:02:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.656 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:53.656 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.656 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.656 [2024-11-21 05:02:10.182336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.656 [2024-11-21 05:02:10.182605] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:53.656 [2024-11-21 05:02:10.182657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:53.656 [2024-11-21 05:02:10.182795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:53.656 [2024-11-21 05:02:10.182937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:53.656 [2024-11-21 05:02:10.182988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:53.656 BaseBdev2 00:15:53.656 [2024-11-21 05:02:10.183174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.656 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.656 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:53.656 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:53.656 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.656 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:15:53.656 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.656 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.657 [ 00:15:53.657 { 00:15:53.657 "name": "BaseBdev2", 00:15:53.657 "aliases": [ 00:15:53.657 "909d57a3-d0bf-42af-9de5-77dc084b6a27" 00:15:53.657 ], 00:15:53.657 "product_name": "Malloc disk", 00:15:53.657 "block_size": 4096, 00:15:53.657 "num_blocks": 8192, 00:15:53.657 "uuid": "909d57a3-d0bf-42af-9de5-77dc084b6a27", 00:15:53.657 "md_size": 32, 00:15:53.657 "md_interleave": false, 00:15:53.657 "dif_type": 0, 00:15:53.657 "assigned_rate_limits": { 00:15:53.657 "rw_ios_per_sec": 0, 00:15:53.657 "rw_mbytes_per_sec": 0, 00:15:53.657 "r_mbytes_per_sec": 0, 00:15:53.657 "w_mbytes_per_sec": 0 00:15:53.657 }, 00:15:53.657 "claimed": true, 00:15:53.657 "claim_type": 
"exclusive_write", 00:15:53.657 "zoned": false, 00:15:53.657 "supported_io_types": { 00:15:53.657 "read": true, 00:15:53.657 "write": true, 00:15:53.657 "unmap": true, 00:15:53.657 "flush": true, 00:15:53.657 "reset": true, 00:15:53.657 "nvme_admin": false, 00:15:53.657 "nvme_io": false, 00:15:53.657 "nvme_io_md": false, 00:15:53.657 "write_zeroes": true, 00:15:53.657 "zcopy": true, 00:15:53.657 "get_zone_info": false, 00:15:53.657 "zone_management": false, 00:15:53.657 "zone_append": false, 00:15:53.657 "compare": false, 00:15:53.657 "compare_and_write": false, 00:15:53.657 "abort": true, 00:15:53.657 "seek_hole": false, 00:15:53.657 "seek_data": false, 00:15:53.657 "copy": true, 00:15:53.657 "nvme_iov_md": false 00:15:53.657 }, 00:15:53.657 "memory_domains": [ 00:15:53.657 { 00:15:53.657 "dma_device_id": "system", 00:15:53.657 "dma_device_type": 1 00:15:53.657 }, 00:15:53.657 { 00:15:53.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.657 "dma_device_type": 2 00:15:53.657 } 00:15:53.657 ], 00:15:53.657 "driver_specific": {} 00:15:53.657 } 00:15:53.657 ] 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.657 
05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.657 "name": "Existed_Raid", 00:15:53.657 "uuid": "53eaa5df-7636-41e8-b765-cb1f0b910ff2", 00:15:53.657 "strip_size_kb": 0, 00:15:53.657 "state": "online", 00:15:53.657 "raid_level": "raid1", 00:15:53.657 "superblock": true, 00:15:53.657 "num_base_bdevs": 2, 00:15:53.657 "num_base_bdevs_discovered": 2, 00:15:53.657 "num_base_bdevs_operational": 2, 00:15:53.657 
"base_bdevs_list": [ 00:15:53.657 { 00:15:53.657 "name": "BaseBdev1", 00:15:53.657 "uuid": "128c9467-9167-4321-a625-db589e5f68c9", 00:15:53.657 "is_configured": true, 00:15:53.657 "data_offset": 256, 00:15:53.657 "data_size": 7936 00:15:53.657 }, 00:15:53.657 { 00:15:53.657 "name": "BaseBdev2", 00:15:53.657 "uuid": "909d57a3-d0bf-42af-9de5-77dc084b6a27", 00:15:53.657 "is_configured": true, 00:15:53.657 "data_offset": 256, 00:15:53.657 "data_size": 7936 00:15:53.657 } 00:15:53.657 ] 00:15:53.657 }' 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.657 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.917 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:53.917 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:53.917 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.917 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.917 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.917 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:53.917 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:53.917 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:53.917 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.917 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:15:53.917 [2024-11-21 05:02:10.645858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:54.178 "name": "Existed_Raid", 00:15:54.178 "aliases": [ 00:15:54.178 "53eaa5df-7636-41e8-b765-cb1f0b910ff2" 00:15:54.178 ], 00:15:54.178 "product_name": "Raid Volume", 00:15:54.178 "block_size": 4096, 00:15:54.178 "num_blocks": 7936, 00:15:54.178 "uuid": "53eaa5df-7636-41e8-b765-cb1f0b910ff2", 00:15:54.178 "md_size": 32, 00:15:54.178 "md_interleave": false, 00:15:54.178 "dif_type": 0, 00:15:54.178 "assigned_rate_limits": { 00:15:54.178 "rw_ios_per_sec": 0, 00:15:54.178 "rw_mbytes_per_sec": 0, 00:15:54.178 "r_mbytes_per_sec": 0, 00:15:54.178 "w_mbytes_per_sec": 0 00:15:54.178 }, 00:15:54.178 "claimed": false, 00:15:54.178 "zoned": false, 00:15:54.178 "supported_io_types": { 00:15:54.178 "read": true, 00:15:54.178 "write": true, 00:15:54.178 "unmap": false, 00:15:54.178 "flush": false, 00:15:54.178 "reset": true, 00:15:54.178 "nvme_admin": false, 00:15:54.178 "nvme_io": false, 00:15:54.178 "nvme_io_md": false, 00:15:54.178 "write_zeroes": true, 00:15:54.178 "zcopy": false, 00:15:54.178 "get_zone_info": false, 00:15:54.178 "zone_management": false, 00:15:54.178 "zone_append": false, 00:15:54.178 "compare": false, 00:15:54.178 "compare_and_write": false, 00:15:54.178 "abort": false, 00:15:54.178 "seek_hole": false, 00:15:54.178 "seek_data": false, 00:15:54.178 "copy": false, 00:15:54.178 "nvme_iov_md": false 00:15:54.178 }, 00:15:54.178 "memory_domains": [ 00:15:54.178 { 00:15:54.178 "dma_device_id": "system", 00:15:54.178 "dma_device_type": 1 00:15:54.178 }, 00:15:54.178 { 00:15:54.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.178 "dma_device_type": 2 00:15:54.178 }, 00:15:54.178 { 
00:15:54.178 "dma_device_id": "system", 00:15:54.178 "dma_device_type": 1 00:15:54.178 }, 00:15:54.178 { 00:15:54.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.178 "dma_device_type": 2 00:15:54.178 } 00:15:54.178 ], 00:15:54.178 "driver_specific": { 00:15:54.178 "raid": { 00:15:54.178 "uuid": "53eaa5df-7636-41e8-b765-cb1f0b910ff2", 00:15:54.178 "strip_size_kb": 0, 00:15:54.178 "state": "online", 00:15:54.178 "raid_level": "raid1", 00:15:54.178 "superblock": true, 00:15:54.178 "num_base_bdevs": 2, 00:15:54.178 "num_base_bdevs_discovered": 2, 00:15:54.178 "num_base_bdevs_operational": 2, 00:15:54.178 "base_bdevs_list": [ 00:15:54.178 { 00:15:54.178 "name": "BaseBdev1", 00:15:54.178 "uuid": "128c9467-9167-4321-a625-db589e5f68c9", 00:15:54.178 "is_configured": true, 00:15:54.178 "data_offset": 256, 00:15:54.178 "data_size": 7936 00:15:54.178 }, 00:15:54.178 { 00:15:54.178 "name": "BaseBdev2", 00:15:54.178 "uuid": "909d57a3-d0bf-42af-9de5-77dc084b6a27", 00:15:54.178 "is_configured": true, 00:15:54.178 "data_offset": 256, 00:15:54.178 "data_size": 7936 00:15:54.178 } 00:15:54.178 ] 00:15:54.178 } 00:15:54.178 } 00:15:54.178 }' 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:54.178 BaseBdev2' 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.178 [2024-11-21 05:02:10.889275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.178 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.179 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.438 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.438 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.438 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.438 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.438 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.438 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.438 "name": "Existed_Raid", 00:15:54.438 "uuid": "53eaa5df-7636-41e8-b765-cb1f0b910ff2", 00:15:54.438 "strip_size_kb": 0, 00:15:54.438 "state": "online", 00:15:54.438 "raid_level": "raid1", 00:15:54.438 "superblock": true, 00:15:54.438 "num_base_bdevs": 2, 00:15:54.438 "num_base_bdevs_discovered": 1, 00:15:54.438 "num_base_bdevs_operational": 1, 00:15:54.438 "base_bdevs_list": [ 00:15:54.438 { 00:15:54.438 "name": null, 00:15:54.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.438 "is_configured": false, 00:15:54.438 "data_offset": 0, 00:15:54.438 "data_size": 7936 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "name": "BaseBdev2", 00:15:54.438 "uuid": 
"909d57a3-d0bf-42af-9de5-77dc084b6a27", 00:15:54.438 "is_configured": true, 00:15:54.438 "data_offset": 256, 00:15:54.438 "data_size": 7936 00:15:54.438 } 00:15:54.438 ] 00:15:54.438 }' 00:15:54.438 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.438 05:02:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.698 [2024-11-21 05:02:11.408666] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.698 [2024-11-21 05:02:11.408822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.698 [2024-11-21 05:02:11.421576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.698 [2024-11-21 05:02:11.421624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.698 [2024-11-21 05:02:11.421642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.698 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:54.959 05:02:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97714 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 97714 ']' 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 97714 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97714 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.959 killing process with pid 97714 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97714' 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 97714 00:15:54.959 [2024-11-21 05:02:11.524785] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.959 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 97714 00:15:54.959 [2024-11-21 05:02:11.525840] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.219 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:55.219 00:15:55.219 real 0m3.894s 00:15:55.219 user 0m6.084s 00:15:55.219 sys 0m0.882s 00:15:55.219 ************************************ 00:15:55.219 END TEST raid_state_function_test_sb_md_separate 00:15:55.219 
************************************ 00:15:55.219 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.219 05:02:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.219 05:02:11 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:55.219 05:02:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:55.219 05:02:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.219 05:02:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.219 ************************************ 00:15:55.219 START TEST raid_superblock_test_md_separate 00:15:55.219 ************************************ 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97956 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97956 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 97956 ']' 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.220 05:02:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.220 [2024-11-21 05:02:11.922129] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:15:55.220 [2024-11-21 05:02:11.922366] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97956 ] 00:15:55.485 [2024-11-21 05:02:12.093610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.485 [2024-11-21 05:02:12.119082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.485 [2024-11-21 05:02:12.161793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.485 [2024-11-21 05:02:12.161924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:56.056 05:02:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.056 malloc1 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.056 [2024-11-21 05:02:12.772780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:56.056 [2024-11-21 05:02:12.772928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.056 [2024-11-21 05:02:12.772973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:56.056 [2024-11-21 05:02:12.773004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.056 [2024-11-21 05:02:12.774926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.056 [2024-11-21 05:02:12.775002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:15:56.056 pt1 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:56.056 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.057 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.316 malloc2 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.316 05:02:12 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.316 [2024-11-21 05:02:12.806117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.316 [2024-11-21 05:02:12.806172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.316 [2024-11-21 05:02:12.806205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:56.316 [2024-11-21 05:02:12.806214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.316 [2024-11-21 05:02:12.808030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.316 [2024-11-21 05:02:12.808142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.316 pt2 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.316 [2024-11-21 05:02:12.818122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:56.316 [2024-11-21 05:02:12.819872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.316 [2024-11-21 05:02:12.820014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:56.316 [2024-11-21 05:02:12.820031] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:56.316 [2024-11-21 05:02:12.820130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:56.316 [2024-11-21 05:02:12.820239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:56.316 [2024-11-21 05:02:12.820253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:56.316 [2024-11-21 05:02:12.820373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.316 05:02:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.316 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.316 "name": "raid_bdev1", 00:15:56.316 "uuid": "17062c27-a445-4abf-87ec-9eb0095f3fe2", 00:15:56.316 "strip_size_kb": 0, 00:15:56.316 "state": "online", 00:15:56.316 "raid_level": "raid1", 00:15:56.316 "superblock": true, 00:15:56.316 "num_base_bdevs": 2, 00:15:56.316 "num_base_bdevs_discovered": 2, 00:15:56.316 "num_base_bdevs_operational": 2, 00:15:56.316 "base_bdevs_list": [ 00:15:56.316 { 00:15:56.316 "name": "pt1", 00:15:56.316 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.316 "is_configured": true, 00:15:56.316 "data_offset": 256, 00:15:56.317 "data_size": 7936 00:15:56.317 }, 00:15:56.317 { 00:15:56.317 "name": "pt2", 00:15:56.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.317 "is_configured": true, 00:15:56.317 "data_offset": 256, 00:15:56.317 "data_size": 7936 00:15:56.317 } 00:15:56.317 ] 00:15:56.317 }' 00:15:56.317 05:02:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.317 05:02:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.576 [2024-11-21 05:02:13.245636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.576 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.576 "name": "raid_bdev1", 00:15:56.576 "aliases": [ 00:15:56.576 "17062c27-a445-4abf-87ec-9eb0095f3fe2" 00:15:56.576 ], 00:15:56.576 "product_name": "Raid Volume", 00:15:56.576 "block_size": 4096, 00:15:56.576 "num_blocks": 7936, 00:15:56.576 "uuid": "17062c27-a445-4abf-87ec-9eb0095f3fe2", 00:15:56.576 "md_size": 32, 00:15:56.576 "md_interleave": false, 00:15:56.576 "dif_type": 0, 00:15:56.576 "assigned_rate_limits": { 00:15:56.576 "rw_ios_per_sec": 0, 00:15:56.576 "rw_mbytes_per_sec": 0, 00:15:56.576 "r_mbytes_per_sec": 0, 00:15:56.576 "w_mbytes_per_sec": 0 00:15:56.576 }, 00:15:56.576 "claimed": false, 00:15:56.576 "zoned": false, 
00:15:56.576 "supported_io_types": { 00:15:56.576 "read": true, 00:15:56.576 "write": true, 00:15:56.576 "unmap": false, 00:15:56.576 "flush": false, 00:15:56.576 "reset": true, 00:15:56.576 "nvme_admin": false, 00:15:56.576 "nvme_io": false, 00:15:56.576 "nvme_io_md": false, 00:15:56.576 "write_zeroes": true, 00:15:56.576 "zcopy": false, 00:15:56.576 "get_zone_info": false, 00:15:56.576 "zone_management": false, 00:15:56.576 "zone_append": false, 00:15:56.576 "compare": false, 00:15:56.576 "compare_and_write": false, 00:15:56.576 "abort": false, 00:15:56.576 "seek_hole": false, 00:15:56.576 "seek_data": false, 00:15:56.576 "copy": false, 00:15:56.577 "nvme_iov_md": false 00:15:56.577 }, 00:15:56.577 "memory_domains": [ 00:15:56.577 { 00:15:56.577 "dma_device_id": "system", 00:15:56.577 "dma_device_type": 1 00:15:56.577 }, 00:15:56.577 { 00:15:56.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.577 "dma_device_type": 2 00:15:56.577 }, 00:15:56.577 { 00:15:56.577 "dma_device_id": "system", 00:15:56.577 "dma_device_type": 1 00:15:56.577 }, 00:15:56.577 { 00:15:56.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.577 "dma_device_type": 2 00:15:56.577 } 00:15:56.577 ], 00:15:56.577 "driver_specific": { 00:15:56.577 "raid": { 00:15:56.577 "uuid": "17062c27-a445-4abf-87ec-9eb0095f3fe2", 00:15:56.577 "strip_size_kb": 0, 00:15:56.577 "state": "online", 00:15:56.577 "raid_level": "raid1", 00:15:56.577 "superblock": true, 00:15:56.577 "num_base_bdevs": 2, 00:15:56.577 "num_base_bdevs_discovered": 2, 00:15:56.577 "num_base_bdevs_operational": 2, 00:15:56.577 "base_bdevs_list": [ 00:15:56.577 { 00:15:56.577 "name": "pt1", 00:15:56.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.577 "is_configured": true, 00:15:56.577 "data_offset": 256, 00:15:56.577 "data_size": 7936 00:15:56.577 }, 00:15:56.577 { 00:15:56.577 "name": "pt2", 00:15:56.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.577 "is_configured": true, 00:15:56.577 "data_offset": 256, 
00:15:56.577 "data_size": 7936 00:15:56.577 } 00:15:56.577 ] 00:15:56.577 } 00:15:56.577 } 00:15:56.577 }' 00:15:56.577 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:56.836 pt2' 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:56.836 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.837 [2024-11-21 05:02:13.461228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=17062c27-a445-4abf-87ec-9eb0095f3fe2 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 17062c27-a445-4abf-87ec-9eb0095f3fe2 ']' 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.837 05:02:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.837 [2024-11-21 05:02:13.508894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.837 [2024-11-21 05:02:13.508924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.837 [2024-11-21 05:02:13.508997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.837 [2024-11-21 05:02:13.509044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.837 [2024-11-21 05:02:13.509052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.837 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.097 [2024-11-21 05:02:13.652668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:57.097 [2024-11-21 05:02:13.654542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:57.097 [2024-11-21 05:02:13.654605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:57.097 [2024-11-21 05:02:13.654645] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:57.097 [2024-11-21 05:02:13.654660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.097 [2024-11-21 05:02:13.654669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:15:57.097 request: 00:15:57.097 { 00:15:57.097 "name": "raid_bdev1", 00:15:57.097 "raid_level": "raid1", 00:15:57.097 "base_bdevs": [ 00:15:57.097 "malloc1", 00:15:57.097 "malloc2" 00:15:57.097 ], 00:15:57.097 "superblock": false, 00:15:57.097 "method": "bdev_raid_create", 00:15:57.097 "req_id": 1 00:15:57.097 } 00:15:57.097 Got JSON-RPC error response 00:15:57.097 response: 00:15:57.097 { 00:15:57.097 "code": -17, 00:15:57.097 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:57.097 } 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.097 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.097 [2024-11-21 05:02:13.720502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.097 [2024-11-21 05:02:13.720621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.097 [2024-11-21 05:02:13.720656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:57.097 [2024-11-21 05:02:13.720684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.097 [2024-11-21 05:02:13.722560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.097 [2024-11-21 05:02:13.722636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.097 [2024-11-21 05:02:13.722718] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:57.097 [2024-11-21 05:02:13.722767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.098 pt1 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.098 "name": "raid_bdev1", 00:15:57.098 "uuid": "17062c27-a445-4abf-87ec-9eb0095f3fe2", 00:15:57.098 "strip_size_kb": 0, 00:15:57.098 "state": "configuring", 00:15:57.098 "raid_level": "raid1", 00:15:57.098 "superblock": true, 00:15:57.098 "num_base_bdevs": 2, 00:15:57.098 "num_base_bdevs_discovered": 1, 00:15:57.098 "num_base_bdevs_operational": 2, 00:15:57.098 "base_bdevs_list": [ 00:15:57.098 { 00:15:57.098 "name": "pt1", 00:15:57.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.098 "is_configured": true, 00:15:57.098 "data_offset": 256, 00:15:57.098 "data_size": 7936 00:15:57.098 }, 00:15:57.098 { 
00:15:57.098 "name": null, 00:15:57.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.098 "is_configured": false, 00:15:57.098 "data_offset": 256, 00:15:57.098 "data_size": 7936 00:15:57.098 } 00:15:57.098 ] 00:15:57.098 }' 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.098 05:02:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.667 [2024-11-21 05:02:14.147841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.667 [2024-11-21 05:02:14.147897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.667 [2024-11-21 05:02:14.147918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:57.667 [2024-11-21 05:02:14.147926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.667 [2024-11-21 05:02:14.148110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.667 [2024-11-21 05:02:14.148125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.667 [2024-11-21 05:02:14.148170] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:57.667 [2024-11-21 05:02:14.148186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.667 [2024-11-21 05:02:14.148262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:57.667 [2024-11-21 05:02:14.148270] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:57.667 [2024-11-21 05:02:14.148335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:57.667 [2024-11-21 05:02:14.148436] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:57.667 [2024-11-21 05:02:14.148448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:57.667 [2024-11-21 05:02:14.148527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.667 pt2 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.667 05:02:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.667 "name": "raid_bdev1", 00:15:57.667 "uuid": "17062c27-a445-4abf-87ec-9eb0095f3fe2", 00:15:57.667 "strip_size_kb": 0, 00:15:57.667 "state": "online", 00:15:57.667 "raid_level": "raid1", 00:15:57.667 "superblock": true, 00:15:57.667 "num_base_bdevs": 2, 00:15:57.667 "num_base_bdevs_discovered": 2, 00:15:57.667 "num_base_bdevs_operational": 2, 00:15:57.667 "base_bdevs_list": [ 00:15:57.667 { 00:15:57.667 "name": "pt1", 00:15:57.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.667 "is_configured": true, 00:15:57.667 "data_offset": 256, 00:15:57.667 "data_size": 7936 00:15:57.667 }, 00:15:57.667 { 00:15:57.667 "name": "pt2", 00:15:57.667 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:57.667 "is_configured": true, 00:15:57.667 "data_offset": 256, 00:15:57.667 "data_size": 7936 00:15:57.667 } 00:15:57.667 ] 00:15:57.667 }' 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.667 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:57.927 [2024-11-21 05:02:14.575435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:57.927 "name": "raid_bdev1", 00:15:57.927 
"aliases": [ 00:15:57.927 "17062c27-a445-4abf-87ec-9eb0095f3fe2" 00:15:57.927 ], 00:15:57.927 "product_name": "Raid Volume", 00:15:57.927 "block_size": 4096, 00:15:57.927 "num_blocks": 7936, 00:15:57.927 "uuid": "17062c27-a445-4abf-87ec-9eb0095f3fe2", 00:15:57.927 "md_size": 32, 00:15:57.927 "md_interleave": false, 00:15:57.927 "dif_type": 0, 00:15:57.927 "assigned_rate_limits": { 00:15:57.927 "rw_ios_per_sec": 0, 00:15:57.927 "rw_mbytes_per_sec": 0, 00:15:57.927 "r_mbytes_per_sec": 0, 00:15:57.927 "w_mbytes_per_sec": 0 00:15:57.927 }, 00:15:57.927 "claimed": false, 00:15:57.927 "zoned": false, 00:15:57.927 "supported_io_types": { 00:15:57.927 "read": true, 00:15:57.927 "write": true, 00:15:57.927 "unmap": false, 00:15:57.927 "flush": false, 00:15:57.927 "reset": true, 00:15:57.927 "nvme_admin": false, 00:15:57.927 "nvme_io": false, 00:15:57.927 "nvme_io_md": false, 00:15:57.927 "write_zeroes": true, 00:15:57.927 "zcopy": false, 00:15:57.927 "get_zone_info": false, 00:15:57.927 "zone_management": false, 00:15:57.927 "zone_append": false, 00:15:57.927 "compare": false, 00:15:57.927 "compare_and_write": false, 00:15:57.927 "abort": false, 00:15:57.927 "seek_hole": false, 00:15:57.927 "seek_data": false, 00:15:57.927 "copy": false, 00:15:57.927 "nvme_iov_md": false 00:15:57.927 }, 00:15:57.927 "memory_domains": [ 00:15:57.927 { 00:15:57.927 "dma_device_id": "system", 00:15:57.927 "dma_device_type": 1 00:15:57.927 }, 00:15:57.927 { 00:15:57.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.927 "dma_device_type": 2 00:15:57.927 }, 00:15:57.927 { 00:15:57.927 "dma_device_id": "system", 00:15:57.927 "dma_device_type": 1 00:15:57.927 }, 00:15:57.927 { 00:15:57.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.927 "dma_device_type": 2 00:15:57.927 } 00:15:57.927 ], 00:15:57.927 "driver_specific": { 00:15:57.927 "raid": { 00:15:57.927 "uuid": "17062c27-a445-4abf-87ec-9eb0095f3fe2", 00:15:57.927 "strip_size_kb": 0, 00:15:57.927 "state": "online", 00:15:57.927 
"raid_level": "raid1", 00:15:57.927 "superblock": true, 00:15:57.927 "num_base_bdevs": 2, 00:15:57.927 "num_base_bdevs_discovered": 2, 00:15:57.927 "num_base_bdevs_operational": 2, 00:15:57.927 "base_bdevs_list": [ 00:15:57.927 { 00:15:57.927 "name": "pt1", 00:15:57.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.927 "is_configured": true, 00:15:57.927 "data_offset": 256, 00:15:57.927 "data_size": 7936 00:15:57.927 }, 00:15:57.927 { 00:15:57.927 "name": "pt2", 00:15:57.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.927 "is_configured": true, 00:15:57.927 "data_offset": 256, 00:15:57.927 "data_size": 7936 00:15:57.927 } 00:15:57.927 ] 00:15:57.927 } 00:15:57.927 } 00:15:57.927 }' 00:15:57.927 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:58.187 pt2' 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.187 05:02:14 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.187 [2024-11-21 05:02:14.827052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 17062c27-a445-4abf-87ec-9eb0095f3fe2 '!=' 17062c27-a445-4abf-87ec-9eb0095f3fe2 ']' 00:15:58.187 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.188 [2024-11-21 05:02:14.866766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:58.188 
05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.188 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.447 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.447 "name": "raid_bdev1", 00:15:58.447 "uuid": "17062c27-a445-4abf-87ec-9eb0095f3fe2", 00:15:58.447 "strip_size_kb": 0, 00:15:58.447 "state": "online", 00:15:58.447 "raid_level": "raid1", 00:15:58.447 "superblock": true, 00:15:58.447 "num_base_bdevs": 2, 00:15:58.447 "num_base_bdevs_discovered": 1, 00:15:58.447 "num_base_bdevs_operational": 1, 00:15:58.447 "base_bdevs_list": [ 00:15:58.447 { 00:15:58.447 "name": null, 00:15:58.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.447 "is_configured": false, 00:15:58.447 "data_offset": 0, 00:15:58.447 "data_size": 7936 00:15:58.447 }, 00:15:58.447 { 00:15:58.447 "name": "pt2", 00:15:58.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.447 "is_configured": true, 00:15:58.447 "data_offset": 256, 00:15:58.447 "data_size": 7936 00:15:58.447 } 
00:15:58.447 ] 00:15:58.447 }' 00:15:58.447 05:02:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.447 05:02:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.707 [2024-11-21 05:02:15.309986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.707 [2024-11-21 05:02:15.310013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.707 [2024-11-21 05:02:15.310078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.707 [2024-11-21 05:02:15.310130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.707 [2024-11-21 05:02:15.310138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.707 05:02:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.707 [2024-11-21 05:02:15.381879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.707 [2024-11-21 
05:02:15.381988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.707 [2024-11-21 05:02:15.382010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:58.707 [2024-11-21 05:02:15.382018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.707 [2024-11-21 05:02:15.384059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.707 [2024-11-21 05:02:15.384107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.707 [2024-11-21 05:02:15.384156] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:58.707 [2024-11-21 05:02:15.384183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.707 [2024-11-21 05:02:15.384243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:58.707 [2024-11-21 05:02:15.384251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:58.707 [2024-11-21 05:02:15.384355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:58.707 [2024-11-21 05:02:15.384450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:58.707 [2024-11-21 05:02:15.384473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:58.707 [2024-11-21 05:02:15.384537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.707 pt2 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.707 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.967 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.967 "name": "raid_bdev1", 00:15:58.967 "uuid": "17062c27-a445-4abf-87ec-9eb0095f3fe2", 00:15:58.967 "strip_size_kb": 0, 00:15:58.967 "state": "online", 00:15:58.967 "raid_level": "raid1", 00:15:58.967 "superblock": true, 00:15:58.967 "num_base_bdevs": 2, 00:15:58.967 
"num_base_bdevs_discovered": 1, 00:15:58.967 "num_base_bdevs_operational": 1, 00:15:58.967 "base_bdevs_list": [ 00:15:58.967 { 00:15:58.967 "name": null, 00:15:58.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.967 "is_configured": false, 00:15:58.967 "data_offset": 256, 00:15:58.967 "data_size": 7936 00:15:58.967 }, 00:15:58.967 { 00:15:58.967 "name": "pt2", 00:15:58.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.967 "is_configured": true, 00:15:58.967 "data_offset": 256, 00:15:58.967 "data_size": 7936 00:15:58.967 } 00:15:58.967 ] 00:15:58.967 }' 00:15:58.967 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.967 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.227 [2024-11-21 05:02:15.845086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.227 [2024-11-21 05:02:15.845122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.227 [2024-11-21 05:02:15.845183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.227 [2024-11-21 05:02:15.845226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.227 [2024-11-21 05:02:15.845236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.227 05:02:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.227 [2024-11-21 05:02:15.904968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.227 [2024-11-21 05:02:15.905076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.227 [2024-11-21 05:02:15.905140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:59.227 [2024-11-21 05:02:15.905176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.227 [2024-11-21 05:02:15.907014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.227 [2024-11-21 05:02:15.907112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:15:59.227 [2024-11-21 05:02:15.907182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:59.227 [2024-11-21 05:02:15.907237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.227 [2024-11-21 05:02:15.907371] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:59.227 [2024-11-21 05:02:15.907437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.227 [2024-11-21 05:02:15.907487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:59.227 [2024-11-21 05:02:15.907576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.227 [2024-11-21 05:02:15.907700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:59.227 [2024-11-21 05:02:15.907741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:59.227 [2024-11-21 05:02:15.907857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:59.227 [2024-11-21 05:02:15.907981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:59.227 pt1 00:15:59.227 [2024-11-21 05:02:15.908017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:59.227 [2024-11-21 05:02:15.908122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.227 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.487 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.487 "name": "raid_bdev1", 00:15:59.487 "uuid": "17062c27-a445-4abf-87ec-9eb0095f3fe2", 00:15:59.487 "strip_size_kb": 0, 00:15:59.487 "state": "online", 00:15:59.487 "raid_level": "raid1", 
00:15:59.487 "superblock": true, 00:15:59.487 "num_base_bdevs": 2, 00:15:59.487 "num_base_bdevs_discovered": 1, 00:15:59.487 "num_base_bdevs_operational": 1, 00:15:59.487 "base_bdevs_list": [ 00:15:59.487 { 00:15:59.487 "name": null, 00:15:59.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.487 "is_configured": false, 00:15:59.487 "data_offset": 256, 00:15:59.487 "data_size": 7936 00:15:59.487 }, 00:15:59.487 { 00:15:59.487 "name": "pt2", 00:15:59.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.487 "is_configured": true, 00:15:59.487 "data_offset": 256, 00:15:59.487 "data_size": 7936 00:15:59.487 } 00:15:59.487 ] 00:15:59.487 }' 00:15:59.487 05:02:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.487 05:02:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.746 
05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.746 [2024-11-21 05:02:16.436306] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 17062c27-a445-4abf-87ec-9eb0095f3fe2 '!=' 17062c27-a445-4abf-87ec-9eb0095f3fe2 ']' 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97956 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 97956 ']' 00:15:59.746 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 97956 00:16:00.006 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:00.006 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.006 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97956 00:16:00.006 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.006 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.006 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97956' 00:16:00.006 killing process with pid 97956 00:16:00.006 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 97956 00:16:00.006 [2024-11-21 05:02:16.517856] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.006 [2024-11-21 05:02:16.517923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:16:00.006 [2024-11-21 05:02:16.517967] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.006 [2024-11-21 05:02:16.517976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:00.006 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 97956 00:16:00.006 [2024-11-21 05:02:16.542887] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.266 05:02:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:00.266 ************************************ 00:16:00.266 END TEST raid_superblock_test_md_separate 00:16:00.266 ************************************ 00:16:00.266 00:16:00.266 real 0m4.933s 00:16:00.266 user 0m8.045s 00:16:00.266 sys 0m1.137s 00:16:00.266 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.266 05:02:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.266 05:02:16 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:00.266 05:02:16 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:00.266 05:02:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:00.266 05:02:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.266 05:02:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:00.266 ************************************ 00:16:00.266 START TEST raid_rebuild_test_sb_md_separate 00:16:00.266 ************************************ 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:00.266 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98265 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98265 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 98265 ']' 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.267 05:02:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.267 [2024-11-21 05:02:16.953666] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:16:00.267 [2024-11-21 05:02:16.953852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:00.267 Zero copy mechanism will not be used. 00:16:00.267 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98265 ] 00:16:00.526 [2024-11-21 05:02:17.127863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.526 [2024-11-21 05:02:17.153731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.526 [2024-11-21 05:02:17.196731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.526 [2024-11-21 05:02:17.196852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.096 BaseBdev1_malloc 
00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.096 [2024-11-21 05:02:17.779898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:01.096 [2024-11-21 05:02:17.779957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.096 [2024-11-21 05:02:17.779981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:01.096 [2024-11-21 05:02:17.779992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.096 [2024-11-21 05:02:17.781889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.096 [2024-11-21 05:02:17.781973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:01.096 BaseBdev1 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.096 BaseBdev2_malloc 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.096 [2024-11-21 05:02:17.809090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:01.096 [2024-11-21 05:02:17.809148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.096 [2024-11-21 05:02:17.809183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:01.096 [2024-11-21 05:02:17.809191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.096 [2024-11-21 05:02:17.811024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.096 [2024-11-21 05:02:17.811059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:01.096 BaseBdev2 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.096 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.356 spare_malloc 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.356 spare_delay 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.356 [2024-11-21 05:02:17.858594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:01.356 [2024-11-21 05:02:17.858647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.356 [2024-11-21 05:02:17.858684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:01.356 [2024-11-21 05:02:17.858692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.356 [2024-11-21 05:02:17.860591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.356 [2024-11-21 05:02:17.860626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:01.356 spare 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.356 [2024-11-21 05:02:17.870611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.356 [2024-11-21 05:02:17.872469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.356 [2024-11-21 05:02:17.872621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:01.356 [2024-11-21 05:02:17.872633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:01.356 [2024-11-21 05:02:17.872702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:01.356 [2024-11-21 05:02:17.872797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:01.356 [2024-11-21 05:02:17.872806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:01.356 [2024-11-21 05:02:17.872907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.356 05:02:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.356 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.356 "name": "raid_bdev1", 00:16:01.356 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:01.356 "strip_size_kb": 0, 00:16:01.356 "state": "online", 00:16:01.356 "raid_level": "raid1", 00:16:01.356 "superblock": true, 00:16:01.356 "num_base_bdevs": 2, 00:16:01.356 "num_base_bdevs_discovered": 2, 00:16:01.356 "num_base_bdevs_operational": 2, 00:16:01.356 "base_bdevs_list": [ 00:16:01.356 { 00:16:01.356 "name": "BaseBdev1", 00:16:01.356 "uuid": "735242e3-68de-5708-a9da-cac8c6c4be5c", 00:16:01.356 "is_configured": true, 00:16:01.356 "data_offset": 256, 00:16:01.356 "data_size": 7936 00:16:01.356 }, 00:16:01.356 { 00:16:01.356 "name": "BaseBdev2", 00:16:01.356 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:01.356 "is_configured": true, 00:16:01.356 "data_offset": 256, 00:16:01.356 "data_size": 7936 
00:16:01.356 } 00:16:01.356 ] 00:16:01.357 }' 00:16:01.357 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.357 05:02:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.616 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:01.616 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.616 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.616 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.616 [2024-11-21 05:02:18.306115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.616 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.616 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:01.616 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.616 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.616 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:01.616 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.875 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.875 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:01.876 [2024-11-21 05:02:18.557432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:01.876 /dev/nbd0 00:16:01.876 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.139 1+0 records in 00:16:02.139 1+0 records out 00:16:02.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633915 s, 6.5 MB/s 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:02.139 05:02:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:02.139 05:02:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:02.745 7936+0 records in 00:16:02.745 7936+0 records out 00:16:02.745 32505856 bytes (33 MB, 31 MiB) copied, 0.620179 s, 52.4 MB/s 00:16:02.745 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:02.745 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.745 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:02.745 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.745 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:02.745 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.745 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:02.745 [2024-11-21 05:02:19.465046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.745 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.005 05:02:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.005 [2024-11-21 05:02:19.498221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.005 "name": "raid_bdev1", 00:16:03.005 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:03.005 "strip_size_kb": 0, 00:16:03.005 "state": "online", 00:16:03.005 "raid_level": "raid1", 00:16:03.005 "superblock": true, 00:16:03.005 "num_base_bdevs": 2, 00:16:03.005 "num_base_bdevs_discovered": 1, 00:16:03.005 "num_base_bdevs_operational": 1, 00:16:03.005 "base_bdevs_list": [ 00:16:03.005 { 00:16:03.005 "name": null, 00:16:03.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.005 "is_configured": false, 00:16:03.005 "data_offset": 0, 00:16:03.005 "data_size": 7936 00:16:03.005 }, 00:16:03.005 { 00:16:03.005 "name": "BaseBdev2", 00:16:03.005 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:03.005 "is_configured": true, 00:16:03.005 "data_offset": 256, 00:16:03.005 "data_size": 7936 00:16:03.005 } 00:16:03.005 ] 00:16:03.005 }' 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.005 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.265 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:03.265 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.265 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.265 [2024-11-21 05:02:19.957422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.265 [2024-11-21 05:02:19.960024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:03.265 [2024-11-21 05:02:19.962079] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.265 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.265 05:02:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:04.647 05:02:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.647 05:02:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.647 05:02:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.647 05:02:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.647 05:02:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.647 05:02:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.647 05:02:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.647 05:02:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.647 05:02:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.647 05:02:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.647 "name": "raid_bdev1", 00:16:04.647 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:04.647 "strip_size_kb": 0, 00:16:04.647 "state": "online", 00:16:04.647 "raid_level": "raid1", 00:16:04.647 "superblock": true, 00:16:04.647 "num_base_bdevs": 2, 00:16:04.647 "num_base_bdevs_discovered": 2, 00:16:04.647 "num_base_bdevs_operational": 2, 00:16:04.647 "process": { 00:16:04.647 "type": "rebuild", 00:16:04.647 "target": "spare", 00:16:04.647 "progress": { 00:16:04.647 "blocks": 2560, 00:16:04.647 "percent": 32 00:16:04.647 } 00:16:04.647 }, 00:16:04.647 "base_bdevs_list": [ 00:16:04.647 { 00:16:04.647 "name": "spare", 00:16:04.647 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:04.647 "is_configured": true, 00:16:04.647 "data_offset": 256, 00:16:04.647 "data_size": 7936 00:16:04.647 }, 00:16:04.647 { 00:16:04.647 "name": "BaseBdev2", 00:16:04.647 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:04.647 "is_configured": true, 00:16:04.647 "data_offset": 256, 00:16:04.647 "data_size": 7936 00:16:04.647 } 00:16:04.647 ] 00:16:04.647 }' 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.647 05:02:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.647 [2024-11-21 05:02:21.124884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.647 [2024-11-21 05:02:21.166763] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:04.647 [2024-11-21 05:02:21.166821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.647 [2024-11-21 05:02:21.166838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.647 [2024-11-21 05:02:21.166844] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.647 05:02:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.647 "name": "raid_bdev1", 00:16:04.647 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:04.647 "strip_size_kb": 0, 00:16:04.647 "state": "online", 00:16:04.647 "raid_level": "raid1", 00:16:04.647 "superblock": true, 00:16:04.647 "num_base_bdevs": 2, 00:16:04.647 "num_base_bdevs_discovered": 1, 00:16:04.647 "num_base_bdevs_operational": 1, 00:16:04.647 "base_bdevs_list": [ 00:16:04.647 { 00:16:04.647 "name": null, 00:16:04.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.647 "is_configured": false, 00:16:04.647 "data_offset": 0, 00:16:04.647 "data_size": 7936 00:16:04.647 }, 00:16:04.647 { 00:16:04.647 "name": "BaseBdev2", 00:16:04.647 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:04.647 "is_configured": true, 00:16:04.647 "data_offset": 256, 00:16:04.647 "data_size": 7936 00:16:04.647 } 00:16:04.647 ] 00:16:04.647 }' 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.647 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.217 "name": "raid_bdev1", 00:16:05.217 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:05.217 "strip_size_kb": 0, 00:16:05.217 "state": "online", 00:16:05.217 "raid_level": "raid1", 00:16:05.217 "superblock": true, 00:16:05.217 "num_base_bdevs": 2, 00:16:05.217 "num_base_bdevs_discovered": 1, 00:16:05.217 "num_base_bdevs_operational": 1, 00:16:05.217 "base_bdevs_list": [ 00:16:05.217 { 00:16:05.217 "name": null, 00:16:05.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.217 
"is_configured": false, 00:16:05.217 "data_offset": 0, 00:16:05.217 "data_size": 7936 00:16:05.217 }, 00:16:05.217 { 00:16:05.217 "name": "BaseBdev2", 00:16:05.217 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:05.217 "is_configured": true, 00:16:05.217 "data_offset": 256, 00:16:05.217 "data_size": 7936 00:16:05.217 } 00:16:05.217 ] 00:16:05.217 }' 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.217 [2024-11-21 05:02:21.784908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.217 [2024-11-21 05:02:21.787430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:05.217 [2024-11-21 05:02:21.789274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.217 05:02:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:06.156 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.156 05:02:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.156 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.156 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.156 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.156 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.156 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.156 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.156 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.156 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.157 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.157 "name": "raid_bdev1", 00:16:06.157 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:06.157 "strip_size_kb": 0, 00:16:06.157 "state": "online", 00:16:06.157 "raid_level": "raid1", 00:16:06.157 "superblock": true, 00:16:06.157 "num_base_bdevs": 2, 00:16:06.157 "num_base_bdevs_discovered": 2, 00:16:06.157 "num_base_bdevs_operational": 2, 00:16:06.157 "process": { 00:16:06.157 "type": "rebuild", 00:16:06.157 "target": "spare", 00:16:06.157 "progress": { 00:16:06.157 "blocks": 2560, 00:16:06.157 "percent": 32 00:16:06.157 } 00:16:06.157 }, 00:16:06.157 "base_bdevs_list": [ 00:16:06.157 { 00:16:06.157 "name": "spare", 00:16:06.157 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:06.157 "is_configured": true, 00:16:06.157 "data_offset": 256, 00:16:06.157 "data_size": 7936 00:16:06.157 }, 
00:16:06.157 { 00:16:06.157 "name": "BaseBdev2", 00:16:06.157 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:06.157 "is_configured": true, 00:16:06.157 "data_offset": 256, 00:16:06.157 "data_size": 7936 00:16:06.157 } 00:16:06.157 ] 00:16:06.157 }' 00:16:06.157 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:06.416 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=594 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.416 05:02:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.416 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.416 "name": "raid_bdev1", 00:16:06.416 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:06.416 "strip_size_kb": 0, 00:16:06.416 "state": "online", 00:16:06.416 "raid_level": "raid1", 00:16:06.416 "superblock": true, 00:16:06.416 "num_base_bdevs": 2, 00:16:06.416 "num_base_bdevs_discovered": 2, 00:16:06.416 "num_base_bdevs_operational": 2, 00:16:06.416 "process": { 00:16:06.416 "type": "rebuild", 00:16:06.416 "target": "spare", 00:16:06.416 "progress": { 00:16:06.416 "blocks": 2816, 00:16:06.416 "percent": 35 00:16:06.416 } 00:16:06.416 }, 00:16:06.416 "base_bdevs_list": [ 00:16:06.416 { 00:16:06.416 "name": "spare", 00:16:06.416 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:06.416 "is_configured": true, 00:16:06.416 "data_offset": 256, 00:16:06.416 "data_size": 7936 00:16:06.416 }, 00:16:06.416 { 00:16:06.416 "name": "BaseBdev2", 00:16:06.416 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:06.416 
"is_configured": true, 00:16:06.417 "data_offset": 256, 00:16:06.417 "data_size": 7936 00:16:06.417 } 00:16:06.417 ] 00:16:06.417 }' 00:16:06.417 05:02:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.417 05:02:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.417 05:02:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.417 05:02:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.417 05:02:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.796 05:02:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.796 "name": "raid_bdev1", 00:16:07.796 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:07.796 "strip_size_kb": 0, 00:16:07.796 "state": "online", 00:16:07.796 "raid_level": "raid1", 00:16:07.796 "superblock": true, 00:16:07.796 "num_base_bdevs": 2, 00:16:07.796 "num_base_bdevs_discovered": 2, 00:16:07.796 "num_base_bdevs_operational": 2, 00:16:07.796 "process": { 00:16:07.796 "type": "rebuild", 00:16:07.796 "target": "spare", 00:16:07.796 "progress": { 00:16:07.796 "blocks": 5888, 00:16:07.796 "percent": 74 00:16:07.796 } 00:16:07.796 }, 00:16:07.796 "base_bdevs_list": [ 00:16:07.796 { 00:16:07.796 "name": "spare", 00:16:07.796 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:07.796 "is_configured": true, 00:16:07.796 "data_offset": 256, 00:16:07.796 "data_size": 7936 00:16:07.796 }, 00:16:07.796 { 00:16:07.796 "name": "BaseBdev2", 00:16:07.796 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:07.796 "is_configured": true, 00:16:07.796 "data_offset": 256, 00:16:07.796 "data_size": 7936 00:16:07.796 } 00:16:07.796 ] 00:16:07.796 }' 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.796 05:02:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.366 [2024-11-21 05:02:24.899912] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:16:08.366 [2024-11-21 05:02:24.900037] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:08.366 [2024-11-21 05:02:24.900210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.625 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.625 "name": "raid_bdev1", 00:16:08.625 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:08.625 "strip_size_kb": 0, 00:16:08.625 "state": "online", 00:16:08.625 "raid_level": "raid1", 00:16:08.625 "superblock": true, 00:16:08.625 
"num_base_bdevs": 2, 00:16:08.625 "num_base_bdevs_discovered": 2, 00:16:08.625 "num_base_bdevs_operational": 2, 00:16:08.625 "base_bdevs_list": [ 00:16:08.625 { 00:16:08.625 "name": "spare", 00:16:08.625 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:08.625 "is_configured": true, 00:16:08.625 "data_offset": 256, 00:16:08.625 "data_size": 7936 00:16:08.625 }, 00:16:08.625 { 00:16:08.625 "name": "BaseBdev2", 00:16:08.625 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:08.625 "is_configured": true, 00:16:08.625 "data_offset": 256, 00:16:08.625 "data_size": 7936 00:16:08.625 } 00:16:08.625 ] 00:16:08.625 }' 00:16:08.626 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.885 05:02:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.885 "name": "raid_bdev1", 00:16:08.885 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:08.885 "strip_size_kb": 0, 00:16:08.885 "state": "online", 00:16:08.885 "raid_level": "raid1", 00:16:08.885 "superblock": true, 00:16:08.885 "num_base_bdevs": 2, 00:16:08.885 "num_base_bdevs_discovered": 2, 00:16:08.885 "num_base_bdevs_operational": 2, 00:16:08.885 "base_bdevs_list": [ 00:16:08.885 { 00:16:08.885 "name": "spare", 00:16:08.885 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:08.885 "is_configured": true, 00:16:08.885 "data_offset": 256, 00:16:08.885 "data_size": 7936 00:16:08.885 }, 00:16:08.885 { 00:16:08.885 "name": "BaseBdev2", 00:16:08.885 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:08.885 "is_configured": true, 00:16:08.885 "data_offset": 256, 00:16:08.885 "data_size": 7936 00:16:08.885 } 00:16:08.885 ] 00:16:08.885 }' 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.885 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.886 "name": "raid_bdev1", 00:16:08.886 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:08.886 
"strip_size_kb": 0, 00:16:08.886 "state": "online", 00:16:08.886 "raid_level": "raid1", 00:16:08.886 "superblock": true, 00:16:08.886 "num_base_bdevs": 2, 00:16:08.886 "num_base_bdevs_discovered": 2, 00:16:08.886 "num_base_bdevs_operational": 2, 00:16:08.886 "base_bdevs_list": [ 00:16:08.886 { 00:16:08.886 "name": "spare", 00:16:08.886 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:08.886 "is_configured": true, 00:16:08.886 "data_offset": 256, 00:16:08.886 "data_size": 7936 00:16:08.886 }, 00:16:08.886 { 00:16:08.886 "name": "BaseBdev2", 00:16:08.886 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:08.886 "is_configured": true, 00:16:08.886 "data_offset": 256, 00:16:08.886 "data_size": 7936 00:16:08.886 } 00:16:08.886 ] 00:16:08.886 }' 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.886 05:02:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.454 [2024-11-21 05:02:26.021371] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.454 [2024-11-21 05:02:26.021398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.454 [2024-11-21 05:02:26.021483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.454 [2024-11-21 05:02:26.021548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.454 [2024-11-21 05:02:26.021568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, 
state offline 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.454 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:09.714 /dev/nbd0 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.714 1+0 records in 00:16:09.714 1+0 records out 00:16:09.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379066 s, 10.8 MB/s 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.714 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:09.974 /dev/nbd1 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.974 1+0 records in 00:16:09.974 1+0 records out 00:16:09.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045178 s, 9.1 MB/s 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:09.974 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:10.234 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:10.234 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:10.234 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:10.234 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.234 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.234 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:10.234 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:10.234 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.234 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.234 05:02:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.494 [2024-11-21 05:02:27.144014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.494 [2024-11-21 05:02:27.144134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.494 [2024-11-21 05:02:27.144160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:10.494 [2024-11-21 05:02:27.144174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:10.494 [2024-11-21 05:02:27.146112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.494 [2024-11-21 05:02:27.146149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.494 [2024-11-21 05:02:27.146203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:10.494 [2024-11-21 05:02:27.146264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.494 [2024-11-21 05:02:27.146401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.494 spare 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.494 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.754 [2024-11-21 05:02:27.246299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:10.754 [2024-11-21 05:02:27.246359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:10.754 [2024-11-21 05:02:27.246495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:10.754 [2024-11-21 05:02:27.246687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:10.754 [2024-11-21 05:02:27.246735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:10.754 [2024-11-21 05:02:27.246900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.754 "name": "raid_bdev1", 00:16:10.754 "uuid": 
"56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:10.754 "strip_size_kb": 0, 00:16:10.754 "state": "online", 00:16:10.754 "raid_level": "raid1", 00:16:10.754 "superblock": true, 00:16:10.754 "num_base_bdevs": 2, 00:16:10.754 "num_base_bdevs_discovered": 2, 00:16:10.754 "num_base_bdevs_operational": 2, 00:16:10.754 "base_bdevs_list": [ 00:16:10.754 { 00:16:10.754 "name": "spare", 00:16:10.754 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:10.754 "is_configured": true, 00:16:10.754 "data_offset": 256, 00:16:10.754 "data_size": 7936 00:16:10.754 }, 00:16:10.754 { 00:16:10.754 "name": "BaseBdev2", 00:16:10.754 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:10.754 "is_configured": true, 00:16:10.754 "data_offset": 256, 00:16:10.754 "data_size": 7936 00:16:10.754 } 00:16:10.754 ] 00:16:10.754 }' 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.754 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.014 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.014 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.015 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.015 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.015 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.015 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.015 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.015 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.015 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.015 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.015 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.015 "name": "raid_bdev1", 00:16:11.015 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:11.015 "strip_size_kb": 0, 00:16:11.015 "state": "online", 00:16:11.015 "raid_level": "raid1", 00:16:11.015 "superblock": true, 00:16:11.015 "num_base_bdevs": 2, 00:16:11.015 "num_base_bdevs_discovered": 2, 00:16:11.015 "num_base_bdevs_operational": 2, 00:16:11.015 "base_bdevs_list": [ 00:16:11.015 { 00:16:11.015 "name": "spare", 00:16:11.015 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:11.015 "is_configured": true, 00:16:11.015 "data_offset": 256, 00:16:11.015 "data_size": 7936 00:16:11.015 }, 00:16:11.015 { 00:16:11.015 "name": "BaseBdev2", 00:16:11.015 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:11.015 "is_configured": true, 00:16:11.015 "data_offset": 256, 00:16:11.015 "data_size": 7936 00:16:11.015 } 00:16:11.015 ] 00:16:11.015 }' 00:16:11.015 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.273 [2024-11-21 05:02:27.882815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.273 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.274 05:02:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.274 "name": "raid_bdev1", 00:16:11.274 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:11.274 "strip_size_kb": 0, 00:16:11.274 "state": "online", 00:16:11.274 "raid_level": "raid1", 00:16:11.274 "superblock": true, 00:16:11.274 "num_base_bdevs": 2, 00:16:11.274 "num_base_bdevs_discovered": 1, 00:16:11.274 "num_base_bdevs_operational": 1, 00:16:11.274 "base_bdevs_list": [ 00:16:11.274 { 00:16:11.274 "name": null, 00:16:11.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.274 "is_configured": false, 00:16:11.274 "data_offset": 0, 00:16:11.274 "data_size": 7936 00:16:11.274 }, 00:16:11.274 { 00:16:11.274 "name": "BaseBdev2", 00:16:11.274 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:11.274 "is_configured": true, 00:16:11.274 "data_offset": 256, 00:16:11.274 "data_size": 7936 00:16:11.274 } 00:16:11.274 ] 00:16:11.274 }' 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.274 05:02:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.842 05:02:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.842 05:02:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.842 05:02:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.842 [2024-11-21 05:02:28.338074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.842 [2024-11-21 05:02:28.338328] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.842 [2024-11-21 05:02:28.338387] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:11.842 [2024-11-21 05:02:28.338473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.842 [2024-11-21 05:02:28.340940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:11.842 [2024-11-21 05:02:28.342818] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.842 05:02:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.842 05:02:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.779 05:02:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.779 "name": "raid_bdev1", 00:16:12.779 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:12.779 "strip_size_kb": 0, 00:16:12.779 "state": "online", 00:16:12.779 "raid_level": "raid1", 00:16:12.779 "superblock": true, 00:16:12.779 "num_base_bdevs": 2, 00:16:12.779 "num_base_bdevs_discovered": 2, 00:16:12.779 "num_base_bdevs_operational": 2, 00:16:12.779 "process": { 00:16:12.779 "type": "rebuild", 00:16:12.779 "target": "spare", 00:16:12.779 "progress": { 00:16:12.779 "blocks": 2560, 00:16:12.779 "percent": 32 00:16:12.779 } 00:16:12.779 }, 00:16:12.779 "base_bdevs_list": [ 00:16:12.779 { 00:16:12.779 "name": "spare", 00:16:12.779 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:12.779 "is_configured": true, 00:16:12.779 "data_offset": 256, 00:16:12.779 "data_size": 7936 00:16:12.779 }, 00:16:12.779 { 00:16:12.779 "name": "BaseBdev2", 00:16:12.779 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:12.779 "is_configured": true, 00:16:12.779 "data_offset": 256, 00:16:12.779 "data_size": 7936 00:16:12.779 } 00:16:12.779 ] 00:16:12.779 
}' 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.779 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.780 [2024-11-21 05:02:29.505735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.039 [2024-11-21 05:02:29.547037] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:13.039 [2024-11-21 05:02:29.547161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.039 [2024-11-21 05:02:29.547198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.039 [2024-11-21 05:02:29.547206] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.039 "name": "raid_bdev1", 00:16:13.039 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:13.039 "strip_size_kb": 0, 00:16:13.039 "state": "online", 00:16:13.039 "raid_level": "raid1", 00:16:13.039 "superblock": true, 00:16:13.039 "num_base_bdevs": 2, 00:16:13.039 "num_base_bdevs_discovered": 1, 00:16:13.039 "num_base_bdevs_operational": 1, 00:16:13.039 "base_bdevs_list": [ 00:16:13.039 { 00:16:13.039 "name": 
null, 00:16:13.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.039 "is_configured": false, 00:16:13.039 "data_offset": 0, 00:16:13.039 "data_size": 7936 00:16:13.039 }, 00:16:13.039 { 00:16:13.039 "name": "BaseBdev2", 00:16:13.039 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:13.039 "is_configured": true, 00:16:13.039 "data_offset": 256, 00:16:13.039 "data_size": 7936 00:16:13.039 } 00:16:13.039 ] 00:16:13.039 }' 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.039 05:02:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.299 05:02:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:13.299 05:02:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.299 05:02:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.299 [2024-11-21 05:02:30.009551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:13.299 [2024-11-21 05:02:30.009659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.299 [2024-11-21 05:02:30.009700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:13.299 [2024-11-21 05:02:30.009728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.299 [2024-11-21 05:02:30.009995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.299 [2024-11-21 05:02:30.010049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:13.299 [2024-11-21 05:02:30.010154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:13.299 [2024-11-21 05:02:30.010193] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.299 [2024-11-21 05:02:30.010262] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:13.299 [2024-11-21 05:02:30.010327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.300 [2024-11-21 05:02:30.012422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:13.300 [2024-11-21 05:02:30.014250] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.300 spare 00:16:13.300 05:02:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.300 05:02:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:14.680 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.680 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.680 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.680 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.680 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.680 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.680 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.680 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.680 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.680 05:02:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.680 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.680 "name": "raid_bdev1", 00:16:14.680 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:14.680 "strip_size_kb": 0, 00:16:14.680 "state": "online", 00:16:14.680 "raid_level": "raid1", 00:16:14.680 "superblock": true, 00:16:14.680 "num_base_bdevs": 2, 00:16:14.680 "num_base_bdevs_discovered": 2, 00:16:14.680 "num_base_bdevs_operational": 2, 00:16:14.680 "process": { 00:16:14.680 "type": "rebuild", 00:16:14.680 "target": "spare", 00:16:14.680 "progress": { 00:16:14.680 "blocks": 2560, 00:16:14.680 "percent": 32 00:16:14.680 } 00:16:14.680 }, 00:16:14.680 "base_bdevs_list": [ 00:16:14.680 { 00:16:14.680 "name": "spare", 00:16:14.680 "uuid": "2afc47d3-4777-5144-98b1-441134e37cc5", 00:16:14.680 "is_configured": true, 00:16:14.680 "data_offset": 256, 00:16:14.680 "data_size": 7936 00:16:14.680 }, 00:16:14.680 { 00:16:14.680 "name": "BaseBdev2", 00:16:14.680 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:14.680 "is_configured": true, 00:16:14.680 "data_offset": 256, 00:16:14.680 "data_size": 7936 00:16:14.680 } 00:16:14.680 ] 00:16:14.681 }' 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.681 [2024-11-21 05:02:31.173239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.681 [2024-11-21 05:02:31.218438] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.681 [2024-11-21 05:02:31.218496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.681 [2024-11-21 05:02:31.218509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.681 [2024-11-21 05:02:31.218517] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.681 "name": "raid_bdev1", 00:16:14.681 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:14.681 "strip_size_kb": 0, 00:16:14.681 "state": "online", 00:16:14.681 "raid_level": "raid1", 00:16:14.681 "superblock": true, 00:16:14.681 "num_base_bdevs": 2, 00:16:14.681 "num_base_bdevs_discovered": 1, 00:16:14.681 "num_base_bdevs_operational": 1, 00:16:14.681 "base_bdevs_list": [ 00:16:14.681 { 00:16:14.681 "name": null, 00:16:14.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.681 "is_configured": false, 00:16:14.681 "data_offset": 0, 00:16:14.681 "data_size": 7936 00:16:14.681 }, 00:16:14.681 { 00:16:14.681 "name": "BaseBdev2", 00:16:14.681 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:14.681 "is_configured": true, 00:16:14.681 "data_offset": 256, 00:16:14.681 "data_size": 7936 00:16:14.681 } 00:16:14.681 ] 00:16:14.681 }' 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.681 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.941 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.941 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.941 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.941 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.941 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.941 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.941 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.941 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.941 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.941 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.199 "name": "raid_bdev1", 00:16:15.199 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:15.199 "strip_size_kb": 0, 00:16:15.199 "state": "online", 00:16:15.199 "raid_level": "raid1", 00:16:15.199 "superblock": true, 00:16:15.199 "num_base_bdevs": 2, 00:16:15.199 "num_base_bdevs_discovered": 1, 00:16:15.199 "num_base_bdevs_operational": 1, 00:16:15.199 "base_bdevs_list": [ 00:16:15.199 { 00:16:15.199 "name": null, 00:16:15.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.199 "is_configured": false, 00:16:15.199 "data_offset": 0, 00:16:15.199 "data_size": 7936 00:16:15.199 }, 00:16:15.199 { 00:16:15.199 "name": "BaseBdev2", 00:16:15.199 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 
00:16:15.199 "is_configured": true, 00:16:15.199 "data_offset": 256, 00:16:15.199 "data_size": 7936 00:16:15.199 } 00:16:15.199 ] 00:16:15.199 }' 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.199 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.199 [2024-11-21 05:02:31.764496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:15.199 [2024-11-21 05:02:31.764591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.199 [2024-11-21 05:02:31.764615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:15.199 [2024-11-21 05:02:31.764626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
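The examine records in this part of the trace show the decision SPDK's raid module makes when a base bdev reappears with an on-disk raid superblock: a superblock whose sequence number is older than the assembled raid bdev's is tolerated and the bdev is re-added (the `spare` case above, seq 4 vs 5), while an old superblock that does not list the examined bdev's uuid fails examine with `Invalid argument` (the `BaseBdev1` case below, seq 1 vs 5). A minimal Python sketch of that decision, with hypothetical names and structure — not SPDK's actual `raid_bdev_examine_sb` code:

```python
# Illustrative sketch of the superblock-examine decision traced in the
# surrounding log records; function and field names are hypothetical.

def examine_sb(sb: dict, bdev_name: str, raid_seq: int) -> str:
    """Classify a base bdev whose raid superblock was found on examine.

    sb        -- parsed superblock: {"seq_number": int, "members": set of names}
    bdev_name -- the bdev being examined (e.g. "spare" or "BaseBdev1")
    raid_seq  -- sequence number of the already-assembled raid bdev
    """
    if sb["seq_number"] < raid_seq:
        # Matches: "raid superblock seq_number on bdev ... smaller than
        # existing raid bdev raid_bdev1"
        if bdev_name not in sb["members"]:
            # Matches: "raid superblock does not contain this bdev's uuid"
            # -> "Failed to examine bdev ...: Invalid argument"
            return "reject"
        # Matches: "Re-adding bdev spare to raid bdev raid_bdev1."
        return "re-add"
    # Superblock is current: configure the base bdev normally.
    return "configure"

print(examine_sb({"seq_number": 4, "members": {"spare"}}, "spare", 5))        # re-add
print(examine_sb({"seq_number": 1, "members": {"BaseBdev2"}}, "BaseBdev1", 5))  # reject
```

This is why the test's `bdev_passthru_create`/`bdev_passthru_delete` cycles on `spare` trigger a rebuild each time the passthru reappears, while re-creating `BaseBdev1` later in the trace does not bring it back into `raid_bdev1`.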
00:16:15.199 [2024-11-21 05:02:31.764853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.199 [2024-11-21 05:02:31.764869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.199 [2024-11-21 05:02:31.764916] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:15.199 [2024-11-21 05:02:31.764944] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:15.199 [2024-11-21 05:02:31.764954] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:15.200 [2024-11-21 05:02:31.764966] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:15.200 BaseBdev1 00:16:15.200 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.200 05:02:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.138 05:02:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.138 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.139 "name": "raid_bdev1", 00:16:16.139 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:16.139 "strip_size_kb": 0, 00:16:16.139 "state": "online", 00:16:16.139 "raid_level": "raid1", 00:16:16.139 "superblock": true, 00:16:16.139 "num_base_bdevs": 2, 00:16:16.139 "num_base_bdevs_discovered": 1, 00:16:16.139 "num_base_bdevs_operational": 1, 00:16:16.139 "base_bdevs_list": [ 00:16:16.139 { 00:16:16.139 "name": null, 00:16:16.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.139 "is_configured": false, 00:16:16.139 "data_offset": 0, 00:16:16.139 "data_size": 7936 00:16:16.139 }, 00:16:16.139 { 00:16:16.139 "name": "BaseBdev2", 00:16:16.139 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:16.139 "is_configured": true, 00:16:16.139 "data_offset": 256, 00:16:16.139 "data_size": 7936 00:16:16.139 } 00:16:16.139 ] 00:16:16.139 }' 00:16:16.139 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.139 05:02:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.708 "name": "raid_bdev1", 00:16:16.708 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:16.708 "strip_size_kb": 0, 00:16:16.708 "state": "online", 00:16:16.708 "raid_level": "raid1", 00:16:16.708 "superblock": true, 00:16:16.708 "num_base_bdevs": 2, 00:16:16.708 "num_base_bdevs_discovered": 1, 00:16:16.708 "num_base_bdevs_operational": 1, 00:16:16.708 "base_bdevs_list": [ 00:16:16.708 { 00:16:16.708 "name": null, 00:16:16.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.708 
"is_configured": false, 00:16:16.708 "data_offset": 0, 00:16:16.708 "data_size": 7936 00:16:16.708 }, 00:16:16.708 { 00:16:16.708 "name": "BaseBdev2", 00:16:16.708 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:16.708 "is_configured": true, 00:16:16.708 "data_offset": 256, 00:16:16.708 "data_size": 7936 00:16:16.708 } 00:16:16.708 ] 00:16:16.708 }' 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:16.708 05:02:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.708 [2024-11-21 05:02:33.357780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.708 [2024-11-21 05:02:33.358003] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:16.708 [2024-11-21 05:02:33.358065] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:16.708 request: 00:16:16.708 { 00:16:16.708 "base_bdev": "BaseBdev1", 00:16:16.708 "raid_bdev": "raid_bdev1", 00:16:16.708 "method": "bdev_raid_add_base_bdev", 00:16:16.708 "req_id": 1 00:16:16.708 } 00:16:16.708 Got JSON-RPC error response 00:16:16.708 response: 00:16:16.708 { 00:16:16.708 "code": -22, 00:16:16.708 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:16.708 } 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.708 05:02:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:17.647 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.647 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:17.647 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.647 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.647 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.647 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.647 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.647 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.647 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.647 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.906 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.906 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.906 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.906 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.906 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.906 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.906 "name": "raid_bdev1", 00:16:17.906 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:17.906 "strip_size_kb": 0, 00:16:17.906 "state": "online", 00:16:17.906 "raid_level": "raid1", 00:16:17.906 "superblock": true, 00:16:17.906 "num_base_bdevs": 2, 00:16:17.906 
"num_base_bdevs_discovered": 1, 00:16:17.906 "num_base_bdevs_operational": 1, 00:16:17.906 "base_bdevs_list": [ 00:16:17.906 { 00:16:17.906 "name": null, 00:16:17.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.906 "is_configured": false, 00:16:17.906 "data_offset": 0, 00:16:17.906 "data_size": 7936 00:16:17.906 }, 00:16:17.906 { 00:16:17.906 "name": "BaseBdev2", 00:16:17.906 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:17.906 "is_configured": true, 00:16:17.906 "data_offset": 256, 00:16:17.906 "data_size": 7936 00:16:17.906 } 00:16:17.906 ] 00:16:17.906 }' 00:16:17.906 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.907 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.166 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.166 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.166 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.166 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.166 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.166 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.166 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.166 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.166 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.426 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.426 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.426 "name": "raid_bdev1", 00:16:18.426 "uuid": "56af59d9-02ac-4c8b-a7da-f22ab84dd299", 00:16:18.426 "strip_size_kb": 0, 00:16:18.426 "state": "online", 00:16:18.426 "raid_level": "raid1", 00:16:18.426 "superblock": true, 00:16:18.426 "num_base_bdevs": 2, 00:16:18.426 "num_base_bdevs_discovered": 1, 00:16:18.426 "num_base_bdevs_operational": 1, 00:16:18.426 "base_bdevs_list": [ 00:16:18.426 { 00:16:18.426 "name": null, 00:16:18.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.426 "is_configured": false, 00:16:18.426 "data_offset": 0, 00:16:18.426 "data_size": 7936 00:16:18.426 }, 00:16:18.426 { 00:16:18.426 "name": "BaseBdev2", 00:16:18.426 "uuid": "360e5b7d-3490-5a4a-b654-466bdd954400", 00:16:18.426 "is_configured": true, 00:16:18.426 "data_offset": 256, 00:16:18.426 "data_size": 7936 00:16:18.426 } 00:16:18.426 ] 00:16:18.426 }' 00:16:18.426 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.426 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.426 05:02:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98265 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 98265 ']' 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 98265 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:18.426 05:02:35 
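The `NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1` call earlier in this run is the suite's expected-failure wrapper: the RPC is supposed to fail (here with `-22`, "Invalid argument"), and the `es=1` bookkeeping records that it did. A simplified sketch of that pattern follows; the real helper in `common/autotest_common.sh` also validates the argument via `valid_exec_arg` and `type -t`, which is omitted here.

```shell
# Simplified sketch of the NOT wrapper from common/autotest_common.sh:
# succeed only when the wrapped command fails. The real helper additionally
# checks the argument type and tracks the exit status in `es`, as the
# xtrace lines above show.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what the test expected
}

NOT false && echo "negative test passed"
```

Used this way, a test like `NOT rpc_cmd bdev_raid_add_base_bdev …` passes exactly when the RPC is rejected.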
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98265 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98265' 00:16:18.426 killing process with pid 98265 00:16:18.426 Received shutdown signal, test time was about 60.000000 seconds 00:16:18.426 00:16:18.426 Latency(us) 00:16:18.426 [2024-11-21T05:02:35.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.426 [2024-11-21T05:02:35.161Z] =================================================================================================================== 00:16:18.426 [2024-11-21T05:02:35.161Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 98265 00:16:18.426 [2024-11-21 05:02:35.087491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.426 [2024-11-21 05:02:35.087618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.426 [2024-11-21 05:02:35.087668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.426 [2024-11-21 05:02:35.087677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:18.426 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 98265 00:16:18.426 [2024-11-21 05:02:35.122116] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:16:18.686 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:18.686 00:16:18.686 real 0m18.474s 00:16:18.686 user 0m24.551s 00:16:18.686 sys 0m2.729s 00:16:18.686 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.686 05:02:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.686 ************************************ 00:16:18.686 END TEST raid_rebuild_test_sb_md_separate 00:16:18.686 ************************************ 00:16:18.686 05:02:35 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:18.686 05:02:35 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:18.686 05:02:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:18.686 05:02:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.686 05:02:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.686 ************************************ 00:16:18.686 START TEST raid_state_function_test_sb_md_interleaved 00:16:18.686 ************************************ 00:16:18.686 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:18.686 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:18.687 05:02:35 
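The shutdown sequence above (`killprocess 98265`) first verifies the pid argument, probes the process with `kill -0`, and checks its name via `ps --no-headers -o comm=` before actually killing it. A minimal sketch of that flow, under the assumption of a plain SIGTERM; the real helper also refuses to kill `sudo` and waits for the pid with retries.

```shell
# Hedged sketch of a killprocess-style helper: check the pid, probe it,
# then terminate and reap it. The comm= / sudo safety check from the real
# helper is omitted for brevity.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                # no pid given
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap if it was our child
}

sleep 30 &
bg=$!
killprocess "$bg"
```

After the call, `kill -0 "$bg"` fails, confirming the background process is gone.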
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98946 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98946' 00:16:18.687 Process raid pid: 98946 00:16:18.687 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98946 00:16:18.946 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 98946 ']' 00:16:18.947 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.947 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.947 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.947 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.947 05:02:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.947 [2024-11-21 05:02:35.501508] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
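The `(( i = 1 )) … (( i <= num_base_bdevs )) … echo BaseBdev$i` lines in the test prologue above build the list of base bdev names. Reconstructed as a standalone loop (variable names taken from the log; the real script collects the echoed names through a command substitution):

```shell
# Generate BaseBdev1..BaseBdevN, as bdev_raid.sh@209-211 does for
# num_base_bdevs=2 in this test.
num_base_bdevs=2
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[*]}"    # prints: BaseBdev1 BaseBdev2
```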
00:16:18.947 [2024-11-21 05:02:35.501686] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.947 [2024-11-21 05:02:35.674402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.206 [2024-11-21 05:02:35.701478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.206 [2024-11-21 05:02:35.744463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.206 [2024-11-21 05:02:35.744500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.775 [2024-11-21 05:02:36.325831] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.775 [2024-11-21 05:02:36.325877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.775 [2024-11-21 05:02:36.325887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.775 [2024-11-21 05:02:36.325895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.775 05:02:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.775 05:02:36 
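The `verify_raid_bdev_state` and `verify_raid_bdev_process` helpers seen throughout this log share one mechanism: dump all raid bdevs with `rpc_cmd bdev_raid_get_bdevs all`, pick one with `jq select`, then read individual fields, defaulting the process fields to `"none"` when no background process is attached. A minimal sketch of that extraction; the JSON literal below is a trimmed stand-in for real `bdev_raid_get_bdevs` output, which carries many more fields as the dumps above show.

```shell
# Trimmed stand-in for `rpc_cmd bdev_raid_get_bdevs all` output (assumption:
# the real output also includes uuid, num_base_bdevs, base_bdevs_list, ...).
json='[{"name":"Existed_Raid","state":"configuring","raid_level":"raid1"}]'

# Select the bdev of interest, as bdev_raid.sh@113 does.
raid_bdev_info=$(echo "$json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Read individual fields; `// "none"` supplies a default when no rebuild or
# other process is attached, mirroring bdev_raid.sh@176-177.
state=$(echo "$raid_bdev_info" | jq -r '.state')
process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
echo "$state/$process_type"
```

With no `process` object present, the `//` alternative yields `none`, which is what the `[[ none == \n\o\n\e ]]` comparisons in the log are checking.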
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.775 "name": "Existed_Raid", 00:16:19.775 "uuid": "03f9f9d5-4794-4b6b-b49a-e01017088a21", 00:16:19.775 "strip_size_kb": 0, 00:16:19.775 "state": "configuring", 00:16:19.775 "raid_level": "raid1", 00:16:19.775 "superblock": true, 00:16:19.775 "num_base_bdevs": 2, 00:16:19.775 "num_base_bdevs_discovered": 0, 00:16:19.775 "num_base_bdevs_operational": 2, 00:16:19.775 "base_bdevs_list": [ 00:16:19.775 { 00:16:19.775 "name": "BaseBdev1", 00:16:19.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.775 "is_configured": false, 00:16:19.775 "data_offset": 0, 00:16:19.775 "data_size": 0 00:16:19.775 }, 00:16:19.775 { 00:16:19.775 "name": "BaseBdev2", 00:16:19.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.775 "is_configured": false, 00:16:19.775 "data_offset": 0, 00:16:19.775 "data_size": 0 00:16:19.775 } 00:16:19.775 ] 00:16:19.775 }' 00:16:19.775 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.776 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.346 [2024-11-21 05:02:36.788940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.346 [2024-11-21 05:02:36.788975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.346 [2024-11-21 05:02:36.800927] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.346 [2024-11-21 05:02:36.800965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.346 [2024-11-21 05:02:36.800973] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.346 [2024-11-21 05:02:36.800983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.346 [2024-11-21 05:02:36.821974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.346 BaseBdev1 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.346 [ 00:16:20.346 { 00:16:20.346 "name": "BaseBdev1", 00:16:20.346 "aliases": [ 00:16:20.346 "913a2d40-db64-4a75-a53b-aa81c50f6216" 00:16:20.346 ], 00:16:20.346 "product_name": "Malloc disk", 00:16:20.346 "block_size": 4128, 00:16:20.346 "num_blocks": 8192, 00:16:20.346 "uuid": "913a2d40-db64-4a75-a53b-aa81c50f6216", 00:16:20.346 "md_size": 32, 00:16:20.346 
"md_interleave": true, 00:16:20.346 "dif_type": 0, 00:16:20.346 "assigned_rate_limits": { 00:16:20.346 "rw_ios_per_sec": 0, 00:16:20.346 "rw_mbytes_per_sec": 0, 00:16:20.346 "r_mbytes_per_sec": 0, 00:16:20.346 "w_mbytes_per_sec": 0 00:16:20.346 }, 00:16:20.346 "claimed": true, 00:16:20.346 "claim_type": "exclusive_write", 00:16:20.346 "zoned": false, 00:16:20.346 "supported_io_types": { 00:16:20.346 "read": true, 00:16:20.346 "write": true, 00:16:20.346 "unmap": true, 00:16:20.346 "flush": true, 00:16:20.346 "reset": true, 00:16:20.346 "nvme_admin": false, 00:16:20.346 "nvme_io": false, 00:16:20.346 "nvme_io_md": false, 00:16:20.346 "write_zeroes": true, 00:16:20.346 "zcopy": true, 00:16:20.346 "get_zone_info": false, 00:16:20.346 "zone_management": false, 00:16:20.346 "zone_append": false, 00:16:20.346 "compare": false, 00:16:20.346 "compare_and_write": false, 00:16:20.346 "abort": true, 00:16:20.346 "seek_hole": false, 00:16:20.346 "seek_data": false, 00:16:20.346 "copy": true, 00:16:20.346 "nvme_iov_md": false 00:16:20.346 }, 00:16:20.346 "memory_domains": [ 00:16:20.346 { 00:16:20.346 "dma_device_id": "system", 00:16:20.346 "dma_device_type": 1 00:16:20.346 }, 00:16:20.346 { 00:16:20.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.346 "dma_device_type": 2 00:16:20.346 } 00:16:20.346 ], 00:16:20.346 "driver_specific": {} 00:16:20.346 } 00:16:20.346 ] 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.346 05:02:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.346 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.347 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.347 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.347 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.347 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.347 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.347 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.347 "name": "Existed_Raid", 00:16:20.347 "uuid": "85f27e4e-7217-4620-ad4c-775c69141b80", 00:16:20.347 "strip_size_kb": 0, 00:16:20.347 "state": "configuring", 00:16:20.347 "raid_level": "raid1", 
00:16:20.347 "superblock": true, 00:16:20.347 "num_base_bdevs": 2, 00:16:20.347 "num_base_bdevs_discovered": 1, 00:16:20.347 "num_base_bdevs_operational": 2, 00:16:20.347 "base_bdevs_list": [ 00:16:20.347 { 00:16:20.347 "name": "BaseBdev1", 00:16:20.347 "uuid": "913a2d40-db64-4a75-a53b-aa81c50f6216", 00:16:20.347 "is_configured": true, 00:16:20.347 "data_offset": 256, 00:16:20.347 "data_size": 7936 00:16:20.347 }, 00:16:20.347 { 00:16:20.347 "name": "BaseBdev2", 00:16:20.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.347 "is_configured": false, 00:16:20.347 "data_offset": 0, 00:16:20.347 "data_size": 0 00:16:20.347 } 00:16:20.347 ] 00:16:20.347 }' 00:16:20.347 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.347 05:02:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.606 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:20.606 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.606 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.606 [2024-11-21 05:02:37.301211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.606 [2024-11-21 05:02:37.301291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:20.606 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.606 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:20.606 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:20.606 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.606 [2024-11-21 05:02:37.313228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.606 [2024-11-21 05:02:37.315087] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.606 [2024-11-21 05:02:37.315185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.606 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.606 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:20.606 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.607 
05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.607 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.865 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.865 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.865 "name": "Existed_Raid", 00:16:20.865 "uuid": "22c1ea9a-3920-4690-9c10-c502856e9822", 00:16:20.865 "strip_size_kb": 0, 00:16:20.865 "state": "configuring", 00:16:20.865 "raid_level": "raid1", 00:16:20.865 "superblock": true, 00:16:20.865 "num_base_bdevs": 2, 00:16:20.865 "num_base_bdevs_discovered": 1, 00:16:20.865 "num_base_bdevs_operational": 2, 00:16:20.865 "base_bdevs_list": [ 00:16:20.865 { 00:16:20.865 "name": "BaseBdev1", 00:16:20.865 "uuid": "913a2d40-db64-4a75-a53b-aa81c50f6216", 00:16:20.865 "is_configured": true, 00:16:20.865 "data_offset": 256, 00:16:20.865 "data_size": 7936 00:16:20.865 }, 00:16:20.865 { 00:16:20.865 "name": "BaseBdev2", 00:16:20.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.865 "is_configured": false, 00:16:20.865 "data_offset": 0, 00:16:20.865 "data_size": 0 00:16:20.865 } 00:16:20.865 ] 00:16:20.865 }' 00:16:20.865 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:20.865 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.124 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:21.124 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.124 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.124 [2024-11-21 05:02:37.775761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.124 [2024-11-21 05:02:37.775924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:21.124 [2024-11-21 05:02:37.775946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:21.124 [2024-11-21 05:02:37.776046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:21.124 [2024-11-21 05:02:37.776138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:21.124 [2024-11-21 05:02:37.776152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:21.124 [2024-11-21 05:02:37.776213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.124 BaseBdev2 00:16:21.124 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.125 [ 00:16:21.125 { 00:16:21.125 "name": "BaseBdev2", 00:16:21.125 "aliases": [ 00:16:21.125 "949ddac1-04d1-4c8e-a495-b774f2512db2" 00:16:21.125 ], 00:16:21.125 "product_name": "Malloc disk", 00:16:21.125 "block_size": 4128, 00:16:21.125 "num_blocks": 8192, 00:16:21.125 "uuid": "949ddac1-04d1-4c8e-a495-b774f2512db2", 00:16:21.125 "md_size": 32, 00:16:21.125 "md_interleave": true, 00:16:21.125 "dif_type": 0, 00:16:21.125 "assigned_rate_limits": { 00:16:21.125 "rw_ios_per_sec": 0, 00:16:21.125 "rw_mbytes_per_sec": 0, 00:16:21.125 "r_mbytes_per_sec": 0, 00:16:21.125 "w_mbytes_per_sec": 0 00:16:21.125 }, 00:16:21.125 "claimed": true, 00:16:21.125 "claim_type": "exclusive_write", 
00:16:21.125 "zoned": false, 00:16:21.125 "supported_io_types": { 00:16:21.125 "read": true, 00:16:21.125 "write": true, 00:16:21.125 "unmap": true, 00:16:21.125 "flush": true, 00:16:21.125 "reset": true, 00:16:21.125 "nvme_admin": false, 00:16:21.125 "nvme_io": false, 00:16:21.125 "nvme_io_md": false, 00:16:21.125 "write_zeroes": true, 00:16:21.125 "zcopy": true, 00:16:21.125 "get_zone_info": false, 00:16:21.125 "zone_management": false, 00:16:21.125 "zone_append": false, 00:16:21.125 "compare": false, 00:16:21.125 "compare_and_write": false, 00:16:21.125 "abort": true, 00:16:21.125 "seek_hole": false, 00:16:21.125 "seek_data": false, 00:16:21.125 "copy": true, 00:16:21.125 "nvme_iov_md": false 00:16:21.125 }, 00:16:21.125 "memory_domains": [ 00:16:21.125 { 00:16:21.125 "dma_device_id": "system", 00:16:21.125 "dma_device_type": 1 00:16:21.125 }, 00:16:21.125 { 00:16:21.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.125 "dma_device_type": 2 00:16:21.125 } 00:16:21.125 ], 00:16:21.125 "driver_specific": {} 00:16:21.125 } 00:16:21.125 ] 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.125 
05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.125 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.383 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.383 "name": "Existed_Raid", 00:16:21.383 "uuid": "22c1ea9a-3920-4690-9c10-c502856e9822", 00:16:21.383 "strip_size_kb": 0, 00:16:21.383 "state": "online", 00:16:21.383 "raid_level": "raid1", 00:16:21.383 "superblock": true, 00:16:21.383 "num_base_bdevs": 2, 00:16:21.383 "num_base_bdevs_discovered": 2, 00:16:21.383 
"num_base_bdevs_operational": 2, 00:16:21.383 "base_bdevs_list": [ 00:16:21.383 { 00:16:21.383 "name": "BaseBdev1", 00:16:21.383 "uuid": "913a2d40-db64-4a75-a53b-aa81c50f6216", 00:16:21.383 "is_configured": true, 00:16:21.383 "data_offset": 256, 00:16:21.383 "data_size": 7936 00:16:21.383 }, 00:16:21.383 { 00:16:21.383 "name": "BaseBdev2", 00:16:21.383 "uuid": "949ddac1-04d1-4c8e-a495-b774f2512db2", 00:16:21.383 "is_configured": true, 00:16:21.383 "data_offset": 256, 00:16:21.383 "data_size": 7936 00:16:21.383 } 00:16:21.383 ] 00:16:21.384 }' 00:16:21.384 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.384 05:02:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.642 05:02:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.642 [2024-11-21 05:02:38.303229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.642 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.642 "name": "Existed_Raid", 00:16:21.642 "aliases": [ 00:16:21.642 "22c1ea9a-3920-4690-9c10-c502856e9822" 00:16:21.642 ], 00:16:21.642 "product_name": "Raid Volume", 00:16:21.642 "block_size": 4128, 00:16:21.642 "num_blocks": 7936, 00:16:21.642 "uuid": "22c1ea9a-3920-4690-9c10-c502856e9822", 00:16:21.642 "md_size": 32, 00:16:21.642 "md_interleave": true, 00:16:21.642 "dif_type": 0, 00:16:21.642 "assigned_rate_limits": { 00:16:21.642 "rw_ios_per_sec": 0, 00:16:21.642 "rw_mbytes_per_sec": 0, 00:16:21.642 "r_mbytes_per_sec": 0, 00:16:21.642 "w_mbytes_per_sec": 0 00:16:21.642 }, 00:16:21.642 "claimed": false, 00:16:21.642 "zoned": false, 00:16:21.642 "supported_io_types": { 00:16:21.642 "read": true, 00:16:21.642 "write": true, 00:16:21.642 "unmap": false, 00:16:21.642 "flush": false, 00:16:21.642 "reset": true, 00:16:21.642 "nvme_admin": false, 00:16:21.642 "nvme_io": false, 00:16:21.642 "nvme_io_md": false, 00:16:21.642 "write_zeroes": true, 00:16:21.642 "zcopy": false, 00:16:21.642 "get_zone_info": false, 00:16:21.642 "zone_management": false, 00:16:21.642 "zone_append": false, 00:16:21.642 "compare": false, 00:16:21.642 "compare_and_write": false, 00:16:21.642 "abort": false, 00:16:21.642 "seek_hole": false, 00:16:21.642 "seek_data": false, 00:16:21.642 "copy": false, 00:16:21.642 "nvme_iov_md": false 00:16:21.642 }, 00:16:21.642 "memory_domains": [ 00:16:21.642 { 00:16:21.642 "dma_device_id": "system", 00:16:21.642 "dma_device_type": 1 00:16:21.642 }, 00:16:21.642 { 00:16:21.642 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:21.642 "dma_device_type": 2 00:16:21.642 }, 00:16:21.642 { 00:16:21.642 "dma_device_id": "system", 00:16:21.642 "dma_device_type": 1 00:16:21.642 }, 00:16:21.642 { 00:16:21.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.642 "dma_device_type": 2 00:16:21.642 } 00:16:21.642 ], 00:16:21.642 "driver_specific": { 00:16:21.642 "raid": { 00:16:21.642 "uuid": "22c1ea9a-3920-4690-9c10-c502856e9822", 00:16:21.642 "strip_size_kb": 0, 00:16:21.642 "state": "online", 00:16:21.642 "raid_level": "raid1", 00:16:21.642 "superblock": true, 00:16:21.643 "num_base_bdevs": 2, 00:16:21.643 "num_base_bdevs_discovered": 2, 00:16:21.643 "num_base_bdevs_operational": 2, 00:16:21.643 "base_bdevs_list": [ 00:16:21.643 { 00:16:21.643 "name": "BaseBdev1", 00:16:21.643 "uuid": "913a2d40-db64-4a75-a53b-aa81c50f6216", 00:16:21.643 "is_configured": true, 00:16:21.643 "data_offset": 256, 00:16:21.643 "data_size": 7936 00:16:21.643 }, 00:16:21.643 { 00:16:21.643 "name": "BaseBdev2", 00:16:21.643 "uuid": "949ddac1-04d1-4c8e-a495-b774f2512db2", 00:16:21.643 "is_configured": true, 00:16:21.643 "data_offset": 256, 00:16:21.643 "data_size": 7936 00:16:21.643 } 00:16:21.643 ] 00:16:21.643 } 00:16:21.643 } 00:16:21.643 }' 00:16:21.643 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:21.902 BaseBdev2' 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:21.902 
05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.902 [2024-11-21 05:02:38.526688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.902 05:02:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.902 "name": "Existed_Raid", 00:16:21.902 "uuid": "22c1ea9a-3920-4690-9c10-c502856e9822", 00:16:21.902 "strip_size_kb": 0, 00:16:21.902 "state": "online", 00:16:21.902 "raid_level": "raid1", 00:16:21.902 "superblock": true, 00:16:21.902 "num_base_bdevs": 2, 00:16:21.902 "num_base_bdevs_discovered": 1, 00:16:21.902 "num_base_bdevs_operational": 1, 00:16:21.902 "base_bdevs_list": [ 00:16:21.902 { 00:16:21.902 "name": null, 00:16:21.902 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:21.902 "is_configured": false, 00:16:21.902 "data_offset": 0, 00:16:21.902 "data_size": 7936 00:16:21.902 }, 00:16:21.902 { 00:16:21.902 "name": "BaseBdev2", 00:16:21.902 "uuid": "949ddac1-04d1-4c8e-a495-b774f2512db2", 00:16:21.902 "is_configured": true, 00:16:21.902 "data_offset": 256, 00:16:21.902 "data_size": 7936 00:16:21.902 } 00:16:21.902 ] 00:16:21.902 }' 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.902 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:22.507 05:02:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.507 05:02:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.507 [2024-11-21 05:02:38.993889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:22.507 [2024-11-21 05:02:38.994040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.507 [2024-11-21 05:02:39.006336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.507 [2024-11-21 05:02:39.006437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.507 [2024-11-21 05:02:39.006479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98946 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 98946 ']' 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 98946 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98946 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98946' 00:16:22.507 killing process with pid 98946 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 98946 00:16:22.507 [2024-11-21 05:02:39.106276] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.507 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 98946 00:16:22.507 [2024-11-21 05:02:39.107301] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:22.773 
05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:22.773 00:16:22.773 real 0m3.924s 00:16:22.773 user 0m6.143s 00:16:22.773 sys 0m0.872s 00:16:22.773 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.773 05:02:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.773 ************************************ 00:16:22.773 END TEST raid_state_function_test_sb_md_interleaved 00:16:22.773 ************************************ 00:16:22.773 05:02:39 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:22.773 05:02:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:22.773 05:02:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.773 05:02:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.773 ************************************ 00:16:22.773 START TEST raid_superblock_test_md_interleaved 00:16:22.773 ************************************ 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99182 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99182 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 99182 ']' 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.773 05:02:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.773 [2024-11-21 05:02:39.492891] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:16:22.773 [2024-11-21 05:02:39.493006] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99182 ] 00:16:23.033 [2024-11-21 05:02:39.662765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.033 [2024-11-21 05:02:39.688718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.033 [2024-11-21 05:02:39.731671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.033 [2024-11-21 05:02:39.731733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.602 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.862 malloc1 00:16:23.862 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.862 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:23.862 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.862 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.862 [2024-11-21 05:02:40.346302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:23.862 [2024-11-21 05:02:40.346357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.862 [2024-11-21 05:02:40.346389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:23.862 [2024-11-21 05:02:40.346402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.862 
[2024-11-21 05:02:40.348327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.863 [2024-11-21 05:02:40.348371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:23.863 pt1 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.863 malloc2 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.863 [2024-11-21 05:02:40.374991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.863 [2024-11-21 05:02:40.375049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.863 [2024-11-21 05:02:40.375066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:23.863 [2024-11-21 05:02:40.375075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.863 [2024-11-21 05:02:40.376929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.863 [2024-11-21 05:02:40.376966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.863 pt2 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.863 [2024-11-21 05:02:40.387006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:23.863 [2024-11-21 05:02:40.388860] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.863 [2024-11-21 05:02:40.389001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:23.863 [2024-11-21 05:02:40.389020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:23.863 [2024-11-21 05:02:40.389108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:23.863 [2024-11-21 05:02:40.389181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:23.863 [2024-11-21 05:02:40.389192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:23.863 [2024-11-21 05:02:40.389263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.863 
05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.863 "name": "raid_bdev1", 00:16:23.863 "uuid": "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d", 00:16:23.863 "strip_size_kb": 0, 00:16:23.863 "state": "online", 00:16:23.863 "raid_level": "raid1", 00:16:23.863 "superblock": true, 00:16:23.863 "num_base_bdevs": 2, 00:16:23.863 "num_base_bdevs_discovered": 2, 00:16:23.863 "num_base_bdevs_operational": 2, 00:16:23.863 "base_bdevs_list": [ 00:16:23.863 { 00:16:23.863 "name": "pt1", 00:16:23.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.863 "is_configured": true, 00:16:23.863 "data_offset": 256, 00:16:23.863 "data_size": 7936 00:16:23.863 }, 00:16:23.863 { 00:16:23.863 "name": "pt2", 00:16:23.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.863 "is_configured": true, 00:16:23.863 "data_offset": 256, 00:16:23.863 "data_size": 7936 00:16:23.863 } 00:16:23.863 ] 00:16:23.863 }' 00:16:23.863 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.863 05:02:40 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 [2024-11-21 05:02:40.878463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.433 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:24.433 "name": "raid_bdev1", 00:16:24.433 "aliases": [ 00:16:24.433 "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d" 00:16:24.433 ], 00:16:24.433 "product_name": "Raid Volume", 00:16:24.433 "block_size": 4128, 00:16:24.433 "num_blocks": 7936, 00:16:24.433 "uuid": "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d", 00:16:24.433 "md_size": 32, 
00:16:24.433 "md_interleave": true, 00:16:24.433 "dif_type": 0, 00:16:24.433 "assigned_rate_limits": { 00:16:24.433 "rw_ios_per_sec": 0, 00:16:24.433 "rw_mbytes_per_sec": 0, 00:16:24.433 "r_mbytes_per_sec": 0, 00:16:24.433 "w_mbytes_per_sec": 0 00:16:24.433 }, 00:16:24.433 "claimed": false, 00:16:24.433 "zoned": false, 00:16:24.433 "supported_io_types": { 00:16:24.433 "read": true, 00:16:24.433 "write": true, 00:16:24.433 "unmap": false, 00:16:24.433 "flush": false, 00:16:24.433 "reset": true, 00:16:24.433 "nvme_admin": false, 00:16:24.433 "nvme_io": false, 00:16:24.433 "nvme_io_md": false, 00:16:24.433 "write_zeroes": true, 00:16:24.433 "zcopy": false, 00:16:24.433 "get_zone_info": false, 00:16:24.433 "zone_management": false, 00:16:24.433 "zone_append": false, 00:16:24.433 "compare": false, 00:16:24.433 "compare_and_write": false, 00:16:24.433 "abort": false, 00:16:24.433 "seek_hole": false, 00:16:24.433 "seek_data": false, 00:16:24.433 "copy": false, 00:16:24.433 "nvme_iov_md": false 00:16:24.433 }, 00:16:24.433 "memory_domains": [ 00:16:24.433 { 00:16:24.433 "dma_device_id": "system", 00:16:24.433 "dma_device_type": 1 00:16:24.433 }, 00:16:24.433 { 00:16:24.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.433 "dma_device_type": 2 00:16:24.433 }, 00:16:24.433 { 00:16:24.433 "dma_device_id": "system", 00:16:24.433 "dma_device_type": 1 00:16:24.433 }, 00:16:24.433 { 00:16:24.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.433 "dma_device_type": 2 00:16:24.433 } 00:16:24.433 ], 00:16:24.433 "driver_specific": { 00:16:24.433 "raid": { 00:16:24.433 "uuid": "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d", 00:16:24.433 "strip_size_kb": 0, 00:16:24.433 "state": "online", 00:16:24.433 "raid_level": "raid1", 00:16:24.433 "superblock": true, 00:16:24.433 "num_base_bdevs": 2, 00:16:24.433 "num_base_bdevs_discovered": 2, 00:16:24.433 "num_base_bdevs_operational": 2, 00:16:24.433 "base_bdevs_list": [ 00:16:24.434 { 00:16:24.434 "name": "pt1", 00:16:24.434 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:24.434 "is_configured": true, 00:16:24.434 "data_offset": 256, 00:16:24.434 "data_size": 7936 00:16:24.434 }, 00:16:24.434 { 00:16:24.434 "name": "pt2", 00:16:24.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.434 "is_configured": true, 00:16:24.434 "data_offset": 256, 00:16:24.434 "data_size": 7936 00:16:24.434 } 00:16:24.434 ] 00:16:24.434 } 00:16:24.434 } 00:16:24.434 }' 00:16:24.434 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.434 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:24.434 pt2' 00:16:24.434 05:02:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:24.434 05:02:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.434 [2024-11-21 05:02:41.121949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d ']' 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.434 [2024-11-21 05:02:41.153646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.434 [2024-11-21 05:02:41.153672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.434 [2024-11-21 05:02:41.153738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.434 [2024-11-21 05:02:41.153802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.434 [2024-11-21 05:02:41.153812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.434 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.694 05:02:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.694 05:02:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.694 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.694 [2024-11-21 05:02:41.285459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:24.694 [2024-11-21 05:02:41.287346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:24.695 [2024-11-21 05:02:41.287409] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:16:24.695 [2024-11-21 05:02:41.287445] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:24.695 [2024-11-21 05:02:41.287459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.695 [2024-11-21 05:02:41.287467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:24.695 request: 00:16:24.695 { 00:16:24.695 "name": "raid_bdev1", 00:16:24.695 "raid_level": "raid1", 00:16:24.695 "base_bdevs": [ 00:16:24.695 "malloc1", 00:16:24.695 "malloc2" 00:16:24.695 ], 00:16:24.695 "superblock": false, 00:16:24.695 "method": "bdev_raid_create", 00:16:24.695 "req_id": 1 00:16:24.695 } 00:16:24.695 Got JSON-RPC error response 00:16:24.695 response: 00:16:24.695 { 00:16:24.695 "code": -17, 00:16:24.695 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:24.695 } 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:24.695 05:02:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.695 [2024-11-21 05:02:41.349299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:24.695 [2024-11-21 05:02:41.349342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.695 [2024-11-21 05:02:41.349357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:24.695 [2024-11-21 05:02:41.349365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.695 [2024-11-21 05:02:41.351185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.695 [2024-11-21 05:02:41.351213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:24.695 [2024-11-21 05:02:41.351252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:24.695 [2024-11-21 05:02:41.351307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:24.695 pt1 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.695 05:02:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.695 
"name": "raid_bdev1", 00:16:24.695 "uuid": "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d", 00:16:24.695 "strip_size_kb": 0, 00:16:24.695 "state": "configuring", 00:16:24.695 "raid_level": "raid1", 00:16:24.695 "superblock": true, 00:16:24.695 "num_base_bdevs": 2, 00:16:24.695 "num_base_bdevs_discovered": 1, 00:16:24.695 "num_base_bdevs_operational": 2, 00:16:24.695 "base_bdevs_list": [ 00:16:24.695 { 00:16:24.695 "name": "pt1", 00:16:24.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:24.695 "is_configured": true, 00:16:24.695 "data_offset": 256, 00:16:24.695 "data_size": 7936 00:16:24.695 }, 00:16:24.695 { 00:16:24.695 "name": null, 00:16:24.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.695 "is_configured": false, 00:16:24.695 "data_offset": 256, 00:16:24.695 "data_size": 7936 00:16:24.695 } 00:16:24.695 ] 00:16:24.695 }' 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.695 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.265 [2024-11-21 05:02:41.804545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:25.265 [2024-11-21 05:02:41.804598] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.265 [2024-11-21 05:02:41.804618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:25.265 [2024-11-21 05:02:41.804627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.265 [2024-11-21 05:02:41.804749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.265 [2024-11-21 05:02:41.804761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:25.265 [2024-11-21 05:02:41.804812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:25.265 [2024-11-21 05:02:41.804828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:25.265 [2024-11-21 05:02:41.804909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:25.265 [2024-11-21 05:02:41.804919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:25.265 [2024-11-21 05:02:41.805040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:25.265 [2024-11-21 05:02:41.805121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:25.265 [2024-11-21 05:02:41.805136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:25.265 [2024-11-21 05:02:41.805211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.265 pt2 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:25.265 05:02:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.265 "name": 
"raid_bdev1", 00:16:25.265 "uuid": "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d", 00:16:25.265 "strip_size_kb": 0, 00:16:25.265 "state": "online", 00:16:25.265 "raid_level": "raid1", 00:16:25.265 "superblock": true, 00:16:25.265 "num_base_bdevs": 2, 00:16:25.265 "num_base_bdevs_discovered": 2, 00:16:25.265 "num_base_bdevs_operational": 2, 00:16:25.265 "base_bdevs_list": [ 00:16:25.265 { 00:16:25.265 "name": "pt1", 00:16:25.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.265 "is_configured": true, 00:16:25.265 "data_offset": 256, 00:16:25.265 "data_size": 7936 00:16:25.265 }, 00:16:25.265 { 00:16:25.265 "name": "pt2", 00:16:25.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.265 "is_configured": true, 00:16:25.265 "data_offset": 256, 00:16:25.265 "data_size": 7936 00:16:25.265 } 00:16:25.265 ] 00:16:25.265 }' 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.265 05:02:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:25.835 05:02:42 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.835 [2024-11-21 05:02:42.275978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.835 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.835 "name": "raid_bdev1", 00:16:25.835 "aliases": [ 00:16:25.835 "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d" 00:16:25.835 ], 00:16:25.835 "product_name": "Raid Volume", 00:16:25.835 "block_size": 4128, 00:16:25.835 "num_blocks": 7936, 00:16:25.835 "uuid": "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d", 00:16:25.835 "md_size": 32, 00:16:25.835 "md_interleave": true, 00:16:25.835 "dif_type": 0, 00:16:25.835 "assigned_rate_limits": { 00:16:25.835 "rw_ios_per_sec": 0, 00:16:25.835 "rw_mbytes_per_sec": 0, 00:16:25.835 "r_mbytes_per_sec": 0, 00:16:25.835 "w_mbytes_per_sec": 0 00:16:25.835 }, 00:16:25.835 "claimed": false, 00:16:25.835 "zoned": false, 00:16:25.835 "supported_io_types": { 00:16:25.835 "read": true, 00:16:25.835 "write": true, 00:16:25.835 "unmap": false, 00:16:25.835 "flush": false, 00:16:25.835 "reset": true, 00:16:25.835 "nvme_admin": false, 00:16:25.835 "nvme_io": false, 00:16:25.835 "nvme_io_md": false, 00:16:25.835 "write_zeroes": true, 00:16:25.835 "zcopy": false, 00:16:25.835 "get_zone_info": false, 00:16:25.835 "zone_management": false, 00:16:25.835 "zone_append": false, 00:16:25.835 "compare": false, 00:16:25.835 "compare_and_write": false, 00:16:25.835 "abort": false, 00:16:25.835 "seek_hole": false, 00:16:25.835 "seek_data": false, 00:16:25.835 "copy": false, 00:16:25.835 "nvme_iov_md": 
false 00:16:25.835 }, 00:16:25.835 "memory_domains": [ 00:16:25.835 { 00:16:25.835 "dma_device_id": "system", 00:16:25.835 "dma_device_type": 1 00:16:25.835 }, 00:16:25.835 { 00:16:25.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.835 "dma_device_type": 2 00:16:25.836 }, 00:16:25.836 { 00:16:25.836 "dma_device_id": "system", 00:16:25.836 "dma_device_type": 1 00:16:25.836 }, 00:16:25.836 { 00:16:25.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.836 "dma_device_type": 2 00:16:25.836 } 00:16:25.836 ], 00:16:25.836 "driver_specific": { 00:16:25.836 "raid": { 00:16:25.836 "uuid": "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d", 00:16:25.836 "strip_size_kb": 0, 00:16:25.836 "state": "online", 00:16:25.836 "raid_level": "raid1", 00:16:25.836 "superblock": true, 00:16:25.836 "num_base_bdevs": 2, 00:16:25.836 "num_base_bdevs_discovered": 2, 00:16:25.836 "num_base_bdevs_operational": 2, 00:16:25.836 "base_bdevs_list": [ 00:16:25.836 { 00:16:25.836 "name": "pt1", 00:16:25.836 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.836 "is_configured": true, 00:16:25.836 "data_offset": 256, 00:16:25.836 "data_size": 7936 00:16:25.836 }, 00:16:25.836 { 00:16:25.836 "name": "pt2", 00:16:25.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.836 "is_configured": true, 00:16:25.836 "data_offset": 256, 00:16:25.836 "data_size": 7936 00:16:25.836 } 00:16:25.836 ] 00:16:25.836 } 00:16:25.836 } 00:16:25.836 }' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:25.836 pt2' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.836 [2024-11-21 05:02:42.471630] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d '!=' b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d ']' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.836 [2024-11-21 05:02:42.503360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:25.836 "name": "raid_bdev1", 00:16:25.836 "uuid": "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d", 00:16:25.836 "strip_size_kb": 0, 00:16:25.836 "state": "online", 00:16:25.836 "raid_level": "raid1", 00:16:25.836 "superblock": true, 00:16:25.836 "num_base_bdevs": 2, 00:16:25.836 "num_base_bdevs_discovered": 1, 00:16:25.836 "num_base_bdevs_operational": 1, 00:16:25.836 "base_bdevs_list": [ 00:16:25.836 { 00:16:25.836 "name": null, 00:16:25.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.836 "is_configured": false, 00:16:25.836 "data_offset": 0, 00:16:25.836 "data_size": 7936 00:16:25.836 }, 00:16:25.836 { 00:16:25.836 "name": "pt2", 00:16:25.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.836 "is_configured": true, 00:16:25.836 "data_offset": 256, 00:16:25.836 "data_size": 7936 00:16:25.836 } 00:16:25.836 ] 00:16:25.836 }' 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.836 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.406 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:26.406 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.406 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.406 [2024-11-21 05:02:42.962580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.406 [2024-11-21 05:02:42.962606] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.406 [2024-11-21 05:02:42.962659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.406 [2024-11-21 05:02:42.962695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:26.406 [2024-11-21 05:02:42.962712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:26.406 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.406 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.406 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.406 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.406 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:26.406 05:02:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:26.406 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.407 [2024-11-21 05:02:43.022502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.407 [2024-11-21 05:02:43.022546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.407 [2024-11-21 05:02:43.022562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:26.407 [2024-11-21 05:02:43.022570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.407 [2024-11-21 05:02:43.024388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.407 [2024-11-21 05:02:43.024422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.407 [2024-11-21 05:02:43.024466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:26.407 [2024-11-21 05:02:43.024492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.407 [2024-11-21 05:02:43.024542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:26.407 [2024-11-21 05:02:43.024549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:16:26.407 [2024-11-21 05:02:43.024630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:26.407 [2024-11-21 05:02:43.024702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:26.407 [2024-11-21 05:02:43.024713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:26.407 [2024-11-21 05:02:43.024771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.407 pt2 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.407 05:02:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.407 "name": "raid_bdev1", 00:16:26.407 "uuid": "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d", 00:16:26.407 "strip_size_kb": 0, 00:16:26.407 "state": "online", 00:16:26.407 "raid_level": "raid1", 00:16:26.407 "superblock": true, 00:16:26.407 "num_base_bdevs": 2, 00:16:26.407 "num_base_bdevs_discovered": 1, 00:16:26.407 "num_base_bdevs_operational": 1, 00:16:26.407 "base_bdevs_list": [ 00:16:26.407 { 00:16:26.407 "name": null, 00:16:26.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.407 "is_configured": false, 00:16:26.407 "data_offset": 256, 00:16:26.407 "data_size": 7936 00:16:26.407 }, 00:16:26.407 { 00:16:26.407 "name": "pt2", 00:16:26.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.407 "is_configured": true, 00:16:26.407 "data_offset": 256, 00:16:26.407 "data_size": 7936 00:16:26.407 } 00:16:26.407 ] 00:16:26.407 }' 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.407 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:26.977 05:02:43 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 [2024-11-21 05:02:43.509635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.977 [2024-11-21 05:02:43.509661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.977 [2024-11-21 05:02:43.509706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.977 [2024-11-21 05:02:43.509740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.977 [2024-11-21 05:02:43.509750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.977 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 [2024-11-21 05:02:43.553553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.977 [2024-11-21 05:02:43.553600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.977 [2024-11-21 05:02:43.553617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:26.977 [2024-11-21 05:02:43.553632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.977 [2024-11-21 05:02:43.555534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.977 [2024-11-21 05:02:43.555571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:26.977 [2024-11-21 05:02:43.555621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:26.977 [2024-11-21 05:02:43.555655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:26.978 [2024-11-21 05:02:43.555720] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:26.978 [2024-11-21 05:02:43.555731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.978 [2024-11-21 05:02:43.555752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:26.978 [2024-11-21 05:02:43.555793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.978 [2024-11-21 05:02:43.555844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007400 00:16:26.978 [2024-11-21 05:02:43.555859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:26.978 [2024-11-21 05:02:43.555913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:26.978 [2024-11-21 05:02:43.555965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:26.978 [2024-11-21 05:02:43.555988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:26.978 [2024-11-21 05:02:43.556051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.978 pt1 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.978 05:02:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.978 "name": "raid_bdev1", 00:16:26.978 "uuid": "b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d", 00:16:26.978 "strip_size_kb": 0, 00:16:26.978 "state": "online", 00:16:26.978 "raid_level": "raid1", 00:16:26.978 "superblock": true, 00:16:26.978 "num_base_bdevs": 2, 00:16:26.978 "num_base_bdevs_discovered": 1, 00:16:26.978 "num_base_bdevs_operational": 1, 00:16:26.978 "base_bdevs_list": [ 00:16:26.978 { 00:16:26.978 "name": null, 00:16:26.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.978 "is_configured": false, 00:16:26.978 "data_offset": 256, 00:16:26.978 "data_size": 7936 00:16:26.978 }, 00:16:26.978 { 00:16:26.978 "name": "pt2", 00:16:26.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.978 "is_configured": true, 00:16:26.978 "data_offset": 256, 00:16:26.978 "data_size": 7936 00:16:26.978 } 00:16:26.978 ] 00:16:26.978 }' 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.978 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.238 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:27.238 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:27.238 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.498 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.498 05:02:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.498 [2024-11-21 05:02:44.028968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d '!=' b9340fe3-d578-43fc-9bd4-1fd1aaa69c5d ']' 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99182 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 99182 ']' 00:16:27.498 05:02:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 99182 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99182 00:16:27.498 killing process with pid 99182 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99182' 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 99182 00:16:27.498 [2024-11-21 05:02:44.086901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.498 [2024-11-21 05:02:44.086964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.498 [2024-11-21 05:02:44.087001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.498 [2024-11-21 05:02:44.087009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:27.498 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 99182 00:16:27.498 [2024-11-21 05:02:44.110763] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.759 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:27.759 00:16:27.759 real 0m4.920s 00:16:27.759 user 0m8.001s 00:16:27.759 sys 0m1.121s 00:16:27.759 
05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.759 ************************************ 00:16:27.759 END TEST raid_superblock_test_md_interleaved 00:16:27.759 ************************************ 00:16:27.759 05:02:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.759 05:02:44 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:27.759 05:02:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:27.759 05:02:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.759 05:02:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.759 ************************************ 00:16:27.759 START TEST raid_rebuild_test_sb_md_interleaved 00:16:27.759 ************************************ 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=99496 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99496 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 99496 ']' 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.759 05:02:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.019 [2024-11-21 05:02:44.516664] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:16:28.019 [2024-11-21 05:02:44.516920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:28.019 Zero copy mechanism will not be used. 
00:16:28.019 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99496 ] 00:16:28.019 [2024-11-21 05:02:44.692019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.019 [2024-11-21 05:02:44.718612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.279 [2024-11-21 05:02:44.762394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.279 [2024-11-21 05:02:44.762511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.849 BaseBdev1_malloc 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.849 [2024-11-21 05:02:45.344745] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:28.849 [2024-11-21 05:02:45.344803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.849 [2024-11-21 05:02:45.344841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:28.849 [2024-11-21 05:02:45.344856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.849 [2024-11-21 05:02:45.346736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.849 [2024-11-21 05:02:45.346831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:28.849 BaseBdev1 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.849 BaseBdev2_malloc 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.849 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.849 [2024-11-21 05:02:45.374645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:16:28.849 [2024-11-21 05:02:45.374892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.849 [2024-11-21 05:02:45.374939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:28.850 [2024-11-21 05:02:45.374998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.850 [2024-11-21 05:02:45.377085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.850 [2024-11-21 05:02:45.377159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:28.850 BaseBdev2 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.850 spare_malloc 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.850 spare_delay 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.850 [2024-11-21 05:02:45.414490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:28.850 [2024-11-21 05:02:45.414537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.850 [2024-11-21 05:02:45.414557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:28.850 [2024-11-21 05:02:45.414565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.850 [2024-11-21 05:02:45.416399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.850 [2024-11-21 05:02:45.416479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:28.850 spare 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.850 [2024-11-21 05:02:45.426491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.850 [2024-11-21 05:02:45.428311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.850 [2024-11-21 05:02:45.428471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:28.850 [2024-11-21 05:02:45.428484] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:28.850 [2024-11-21 05:02:45.428567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:28.850 [2024-11-21 05:02:45.428629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:28.850 [2024-11-21 05:02:45.428646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:28.850 [2024-11-21 05:02:45.428704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.850 05:02:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.850 "name": "raid_bdev1", 00:16:28.850 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:28.850 "strip_size_kb": 0, 00:16:28.850 "state": "online", 00:16:28.850 "raid_level": "raid1", 00:16:28.850 "superblock": true, 00:16:28.850 "num_base_bdevs": 2, 00:16:28.850 "num_base_bdevs_discovered": 2, 00:16:28.850 "num_base_bdevs_operational": 2, 00:16:28.850 "base_bdevs_list": [ 00:16:28.850 { 00:16:28.850 "name": "BaseBdev1", 00:16:28.850 "uuid": "cde2f058-366a-54e4-8a8e-487c594569e3", 00:16:28.850 "is_configured": true, 00:16:28.850 "data_offset": 256, 00:16:28.850 "data_size": 7936 00:16:28.850 }, 00:16:28.850 { 00:16:28.850 "name": "BaseBdev2", 00:16:28.850 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:28.850 "is_configured": true, 00:16:28.850 "data_offset": 256, 00:16:28.850 "data_size": 7936 00:16:28.850 } 00:16:28.850 ] 00:16:28.850 }' 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.850 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.420 05:02:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.420 [2024-11-21 05:02:45.885969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.420 [2024-11-21 05:02:45.985532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.420 05:02:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.420 05:02:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.420 05:02:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.420 "name": "raid_bdev1", 00:16:29.420 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:29.420 "strip_size_kb": 0, 00:16:29.420 "state": "online", 00:16:29.420 "raid_level": "raid1", 00:16:29.420 "superblock": true, 00:16:29.420 "num_base_bdevs": 2, 00:16:29.420 "num_base_bdevs_discovered": 1, 00:16:29.420 "num_base_bdevs_operational": 1, 00:16:29.420 "base_bdevs_list": [ 00:16:29.420 { 00:16:29.420 "name": null, 00:16:29.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.420 "is_configured": false, 00:16:29.420 "data_offset": 0, 00:16:29.420 "data_size": 7936 00:16:29.420 }, 00:16:29.420 { 00:16:29.420 "name": "BaseBdev2", 00:16:29.420 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:29.420 "is_configured": true, 00:16:29.420 "data_offset": 256, 00:16:29.420 "data_size": 7936 00:16:29.420 } 00:16:29.420 ] 00:16:29.420 }' 00:16:29.420 05:02:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.420 05:02:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.680 05:02:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.680 05:02:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.680 05:02:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.680 [2024-11-21 05:02:46.376902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.680 [2024-11-21 05:02:46.393651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 
00:16:29.680 05:02:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.680 05:02:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:29.680 [2024-11-21 05:02:46.400757] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.063 "name": "raid_bdev1", 00:16:31.063 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:31.063 "strip_size_kb": 0, 00:16:31.063 "state": "online", 00:16:31.063 "raid_level": "raid1", 00:16:31.063 "superblock": true, 00:16:31.063 
"num_base_bdevs": 2, 00:16:31.063 "num_base_bdevs_discovered": 2, 00:16:31.063 "num_base_bdevs_operational": 2, 00:16:31.063 "process": { 00:16:31.063 "type": "rebuild", 00:16:31.063 "target": "spare", 00:16:31.063 "progress": { 00:16:31.063 "blocks": 2560, 00:16:31.063 "percent": 32 00:16:31.063 } 00:16:31.063 }, 00:16:31.063 "base_bdevs_list": [ 00:16:31.063 { 00:16:31.063 "name": "spare", 00:16:31.063 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:31.063 "is_configured": true, 00:16:31.063 "data_offset": 256, 00:16:31.063 "data_size": 7936 00:16:31.063 }, 00:16:31.063 { 00:16:31.063 "name": "BaseBdev2", 00:16:31.063 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:31.063 "is_configured": true, 00:16:31.063 "data_offset": 256, 00:16:31.063 "data_size": 7936 00:16:31.063 } 00:16:31.063 ] 00:16:31.063 }' 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.063 [2024-11-21 05:02:47.559719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.063 [2024-11-21 05:02:47.606579] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:31.063 
[2024-11-21 05:02:47.606628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.063 [2024-11-21 05:02:47.606644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.063 [2024-11-21 05:02:47.606660] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.063 05:02:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.063 "name": "raid_bdev1", 00:16:31.063 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:31.063 "strip_size_kb": 0, 00:16:31.063 "state": "online", 00:16:31.063 "raid_level": "raid1", 00:16:31.063 "superblock": true, 00:16:31.063 "num_base_bdevs": 2, 00:16:31.063 "num_base_bdevs_discovered": 1, 00:16:31.063 "num_base_bdevs_operational": 1, 00:16:31.063 "base_bdevs_list": [ 00:16:31.063 { 00:16:31.063 "name": null, 00:16:31.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.063 "is_configured": false, 00:16:31.063 "data_offset": 0, 00:16:31.063 "data_size": 7936 00:16:31.063 }, 00:16:31.063 { 00:16:31.063 "name": "BaseBdev2", 00:16:31.063 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:31.063 "is_configured": true, 00:16:31.063 "data_offset": 256, 00:16:31.063 "data_size": 7936 00:16:31.063 } 00:16:31.063 ] 00:16:31.063 }' 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.063 05:02:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.633 05:02:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.633 "name": "raid_bdev1", 00:16:31.633 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:31.633 "strip_size_kb": 0, 00:16:31.633 "state": "online", 00:16:31.633 "raid_level": "raid1", 00:16:31.633 "superblock": true, 00:16:31.633 "num_base_bdevs": 2, 00:16:31.633 "num_base_bdevs_discovered": 1, 00:16:31.633 "num_base_bdevs_operational": 1, 00:16:31.633 "base_bdevs_list": [ 00:16:31.633 { 00:16:31.633 "name": null, 00:16:31.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.633 "is_configured": false, 00:16:31.633 "data_offset": 0, 00:16:31.633 "data_size": 7936 00:16:31.633 }, 00:16:31.633 { 00:16:31.633 "name": "BaseBdev2", 00:16:31.633 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:31.633 "is_configured": true, 00:16:31.633 "data_offset": 256, 00:16:31.633 "data_size": 7936 00:16:31.633 } 00:16:31.633 ] 00:16:31.633 }' 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.633 05:02:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.633 [2024-11-21 05:02:48.245613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:31.633 [2024-11-21 05:02:48.248852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:31.633 [2024-11-21 05:02:48.250595] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.633 05:02:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:32.574 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.574 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.574 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.574 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.574 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.574 
05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.574 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.575 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.575 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.575 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.835 "name": "raid_bdev1", 00:16:32.835 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:32.835 "strip_size_kb": 0, 00:16:32.835 "state": "online", 00:16:32.835 "raid_level": "raid1", 00:16:32.835 "superblock": true, 00:16:32.835 "num_base_bdevs": 2, 00:16:32.835 "num_base_bdevs_discovered": 2, 00:16:32.835 "num_base_bdevs_operational": 2, 00:16:32.835 "process": { 00:16:32.835 "type": "rebuild", 00:16:32.835 "target": "spare", 00:16:32.835 "progress": { 00:16:32.835 "blocks": 2560, 00:16:32.835 "percent": 32 00:16:32.835 } 00:16:32.835 }, 00:16:32.835 "base_bdevs_list": [ 00:16:32.835 { 00:16:32.835 "name": "spare", 00:16:32.835 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:32.835 "is_configured": true, 00:16:32.835 "data_offset": 256, 00:16:32.835 "data_size": 7936 00:16:32.835 }, 00:16:32.835 { 00:16:32.835 "name": "BaseBdev2", 00:16:32.835 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:32.835 "is_configured": true, 00:16:32.835 "data_offset": 256, 00:16:32.835 "data_size": 7936 00:16:32.835 } 00:16:32.835 ] 00:16:32.835 }' 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:32.835 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=621 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.835 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.835 "name": "raid_bdev1", 00:16:32.835 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:32.835 "strip_size_kb": 0, 00:16:32.835 "state": "online", 00:16:32.835 "raid_level": "raid1", 00:16:32.835 "superblock": true, 00:16:32.836 "num_base_bdevs": 2, 00:16:32.836 "num_base_bdevs_discovered": 2, 00:16:32.836 "num_base_bdevs_operational": 2, 00:16:32.836 "process": { 00:16:32.836 "type": "rebuild", 00:16:32.836 "target": "spare", 00:16:32.836 "progress": { 00:16:32.836 "blocks": 2816, 00:16:32.836 "percent": 35 00:16:32.836 } 00:16:32.836 }, 00:16:32.836 "base_bdevs_list": [ 00:16:32.836 { 00:16:32.836 "name": "spare", 00:16:32.836 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:32.836 "is_configured": true, 00:16:32.836 "data_offset": 256, 00:16:32.836 "data_size": 7936 00:16:32.836 }, 00:16:32.836 { 00:16:32.836 "name": "BaseBdev2", 00:16:32.836 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:32.836 "is_configured": true, 00:16:32.836 "data_offset": 256, 00:16:32.836 "data_size": 7936 00:16:32.836 } 00:16:32.836 ] 00:16:32.836 }' 00:16:32.836 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.836 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.836 05:02:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.836 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.836 05:02:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.216 "name": "raid_bdev1", 00:16:34.216 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:34.216 "strip_size_kb": 0, 00:16:34.216 "state": 
"online", 00:16:34.216 "raid_level": "raid1", 00:16:34.216 "superblock": true, 00:16:34.216 "num_base_bdevs": 2, 00:16:34.216 "num_base_bdevs_discovered": 2, 00:16:34.216 "num_base_bdevs_operational": 2, 00:16:34.216 "process": { 00:16:34.216 "type": "rebuild", 00:16:34.216 "target": "spare", 00:16:34.216 "progress": { 00:16:34.216 "blocks": 5888, 00:16:34.216 "percent": 74 00:16:34.216 } 00:16:34.216 }, 00:16:34.216 "base_bdevs_list": [ 00:16:34.216 { 00:16:34.216 "name": "spare", 00:16:34.216 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:34.216 "is_configured": true, 00:16:34.216 "data_offset": 256, 00:16:34.216 "data_size": 7936 00:16:34.216 }, 00:16:34.216 { 00:16:34.216 "name": "BaseBdev2", 00:16:34.216 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:34.216 "is_configured": true, 00:16:34.216 "data_offset": 256, 00:16:34.216 "data_size": 7936 00:16:34.216 } 00:16:34.216 ] 00:16:34.216 }' 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.216 05:02:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.787 [2024-11-21 05:02:51.361368] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:34.787 [2024-11-21 05:02:51.361437] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:34.787 [2024-11-21 05:02:51.361530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.047 "name": "raid_bdev1", 00:16:35.047 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:35.047 "strip_size_kb": 0, 00:16:35.047 "state": "online", 00:16:35.047 "raid_level": "raid1", 00:16:35.047 "superblock": true, 00:16:35.047 "num_base_bdevs": 2, 00:16:35.047 "num_base_bdevs_discovered": 2, 00:16:35.047 "num_base_bdevs_operational": 2, 00:16:35.047 "base_bdevs_list": [ 00:16:35.047 { 00:16:35.047 "name": "spare", 00:16:35.047 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:35.047 "is_configured": true, 00:16:35.047 "data_offset": 256, 
00:16:35.047 "data_size": 7936 00:16:35.047 }, 00:16:35.047 { 00:16:35.047 "name": "BaseBdev2", 00:16:35.047 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:35.047 "is_configured": true, 00:16:35.047 "data_offset": 256, 00:16:35.047 "data_size": 7936 00:16:35.047 } 00:16:35.047 ] 00:16:35.047 }' 00:16:35.047 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.308 05:02:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.308 "name": "raid_bdev1", 00:16:35.308 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:35.308 "strip_size_kb": 0, 00:16:35.308 "state": "online", 00:16:35.308 "raid_level": "raid1", 00:16:35.308 "superblock": true, 00:16:35.308 "num_base_bdevs": 2, 00:16:35.308 "num_base_bdevs_discovered": 2, 00:16:35.308 "num_base_bdevs_operational": 2, 00:16:35.308 "base_bdevs_list": [ 00:16:35.308 { 00:16:35.308 "name": "spare", 00:16:35.308 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:35.308 "is_configured": true, 00:16:35.308 "data_offset": 256, 00:16:35.308 "data_size": 7936 00:16:35.308 }, 00:16:35.308 { 00:16:35.308 "name": "BaseBdev2", 00:16:35.308 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:35.308 "is_configured": true, 00:16:35.308 "data_offset": 256, 00:16:35.308 "data_size": 7936 00:16:35.308 } 00:16:35.308 ] 00:16:35.308 }' 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.308 05:02:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.308 05:02:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.308 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.308 "name": "raid_bdev1", 00:16:35.308 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:35.308 "strip_size_kb": 0, 00:16:35.308 "state": "online", 00:16:35.308 "raid_level": "raid1", 00:16:35.308 "superblock": true, 00:16:35.308 "num_base_bdevs": 2, 00:16:35.308 "num_base_bdevs_discovered": 2, 
00:16:35.308 "num_base_bdevs_operational": 2, 00:16:35.308 "base_bdevs_list": [ 00:16:35.308 { 00:16:35.308 "name": "spare", 00:16:35.308 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:35.309 "is_configured": true, 00:16:35.309 "data_offset": 256, 00:16:35.309 "data_size": 7936 00:16:35.309 }, 00:16:35.309 { 00:16:35.309 "name": "BaseBdev2", 00:16:35.309 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:35.309 "is_configured": true, 00:16:35.309 "data_offset": 256, 00:16:35.309 "data_size": 7936 00:16:35.309 } 00:16:35.309 ] 00:16:35.309 }' 00:16:35.309 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.309 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.880 [2024-11-21 05:02:52.451385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.880 [2024-11-21 05:02:52.451452] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.880 [2024-11-21 05:02:52.451560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.880 [2024-11-21 05:02:52.451658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.880 [2024-11-21 05:02:52.451671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.880 05:02:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.880 [2024-11-21 05:02:52.519253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.880 [2024-11-21 05:02:52.519302] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:35.880 [2024-11-21 05:02:52.519320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:35.880 [2024-11-21 05:02:52.519329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.880 [2024-11-21 05:02:52.521274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.880 [2024-11-21 05:02:52.521315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.880 [2024-11-21 05:02:52.521363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:35.880 [2024-11-21 05:02:52.521415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.880 [2024-11-21 05:02:52.521528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.880 spare 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.880 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.141 [2024-11-21 05:02:52.621409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:36.141 [2024-11-21 05:02:52.621432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:36.141 [2024-11-21 05:02:52.621521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:36.141 [2024-11-21 05:02:52.621596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:36.141 [2024-11-21 05:02:52.621607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:36.141 [2024-11-21 05:02:52.621671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.141 05:02:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.141 "name": "raid_bdev1", 00:16:36.141 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:36.141 "strip_size_kb": 0, 00:16:36.141 "state": "online", 00:16:36.141 "raid_level": "raid1", 00:16:36.141 "superblock": true, 00:16:36.141 "num_base_bdevs": 2, 00:16:36.141 "num_base_bdevs_discovered": 2, 00:16:36.141 "num_base_bdevs_operational": 2, 00:16:36.141 "base_bdevs_list": [ 00:16:36.141 { 00:16:36.141 "name": "spare", 00:16:36.141 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:36.141 "is_configured": true, 00:16:36.141 "data_offset": 256, 00:16:36.141 "data_size": 7936 00:16:36.141 }, 00:16:36.141 { 00:16:36.141 "name": "BaseBdev2", 00:16:36.141 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:36.141 "is_configured": true, 00:16:36.141 "data_offset": 256, 00:16:36.141 "data_size": 7936 00:16:36.141 } 00:16:36.141 ] 00:16:36.141 }' 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.141 05:02:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.401 "name": "raid_bdev1", 00:16:36.401 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:36.401 "strip_size_kb": 0, 00:16:36.401 "state": "online", 00:16:36.401 "raid_level": "raid1", 00:16:36.401 "superblock": true, 00:16:36.401 "num_base_bdevs": 2, 00:16:36.401 "num_base_bdevs_discovered": 2, 00:16:36.401 "num_base_bdevs_operational": 2, 00:16:36.401 "base_bdevs_list": [ 00:16:36.401 { 00:16:36.401 "name": "spare", 00:16:36.401 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:36.401 "is_configured": true, 00:16:36.401 "data_offset": 256, 00:16:36.401 "data_size": 7936 00:16:36.401 }, 00:16:36.401 { 00:16:36.401 "name": "BaseBdev2", 00:16:36.401 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:36.401 "is_configured": true, 00:16:36.401 "data_offset": 256, 00:16:36.401 "data_size": 7936 00:16:36.401 } 00:16:36.401 ] 00:16:36.401 }' 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.401 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.661 [2024-11-21 05:02:53.198189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.661 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.661 "name": "raid_bdev1", 00:16:36.661 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:36.661 "strip_size_kb": 0, 00:16:36.661 "state": "online", 00:16:36.661 "raid_level": "raid1", 00:16:36.662 "superblock": true, 00:16:36.662 "num_base_bdevs": 2, 00:16:36.662 "num_base_bdevs_discovered": 1, 00:16:36.662 "num_base_bdevs_operational": 1, 00:16:36.662 "base_bdevs_list": [ 00:16:36.662 { 00:16:36.662 "name": null, 00:16:36.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.662 
"is_configured": false, 00:16:36.662 "data_offset": 0, 00:16:36.662 "data_size": 7936 00:16:36.662 }, 00:16:36.662 { 00:16:36.662 "name": "BaseBdev2", 00:16:36.662 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:36.662 "is_configured": true, 00:16:36.662 "data_offset": 256, 00:16:36.662 "data_size": 7936 00:16:36.662 } 00:16:36.662 ] 00:16:36.662 }' 00:16:36.662 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.662 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.232 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:37.232 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.232 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.232 [2024-11-21 05:02:53.665394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.232 [2024-11-21 05:02:53.665543] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.232 [2024-11-21 05:02:53.665556] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:37.232 [2024-11-21 05:02:53.665589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.232 [2024-11-21 05:02:53.669184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:37.232 [2024-11-21 05:02:53.671002] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.232 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.232 05:02:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.172 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:38.172 "name": "raid_bdev1", 00:16:38.172 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:38.172 "strip_size_kb": 0, 00:16:38.172 "state": "online", 00:16:38.172 "raid_level": "raid1", 00:16:38.172 "superblock": true, 00:16:38.172 "num_base_bdevs": 2, 00:16:38.172 "num_base_bdevs_discovered": 2, 00:16:38.172 "num_base_bdevs_operational": 2, 00:16:38.172 "process": { 00:16:38.172 "type": "rebuild", 00:16:38.172 "target": "spare", 00:16:38.172 "progress": { 00:16:38.172 "blocks": 2560, 00:16:38.172 "percent": 32 00:16:38.172 } 00:16:38.172 }, 00:16:38.173 "base_bdevs_list": [ 00:16:38.173 { 00:16:38.173 "name": "spare", 00:16:38.173 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:38.173 "is_configured": true, 00:16:38.173 "data_offset": 256, 00:16:38.173 "data_size": 7936 00:16:38.173 }, 00:16:38.173 { 00:16:38.173 "name": "BaseBdev2", 00:16:38.173 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:38.173 "is_configured": true, 00:16:38.173 "data_offset": 256, 00:16:38.173 "data_size": 7936 00:16:38.173 } 00:16:38.173 ] 00:16:38.173 }' 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.173 [2024-11-21 05:02:54.813800] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.173 [2024-11-21 05:02:54.875037] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.173 [2024-11-21 05:02:54.875168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.173 [2024-11-21 05:02:54.875186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.173 [2024-11-21 05:02:54.875194] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.173 05:02:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.173 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.432 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.432 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.432 "name": "raid_bdev1", 00:16:38.432 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:38.432 "strip_size_kb": 0, 00:16:38.432 "state": "online", 00:16:38.433 "raid_level": "raid1", 00:16:38.433 "superblock": true, 00:16:38.433 "num_base_bdevs": 2, 00:16:38.433 "num_base_bdevs_discovered": 1, 00:16:38.433 "num_base_bdevs_operational": 1, 00:16:38.433 "base_bdevs_list": [ 00:16:38.433 { 00:16:38.433 "name": null, 00:16:38.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.433 "is_configured": false, 00:16:38.433 "data_offset": 0, 00:16:38.433 "data_size": 7936 00:16:38.433 }, 00:16:38.433 { 00:16:38.433 "name": "BaseBdev2", 00:16:38.433 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:38.433 "is_configured": true, 00:16:38.433 "data_offset": 256, 00:16:38.433 "data_size": 7936 00:16:38.433 } 00:16:38.433 ] 00:16:38.433 }' 00:16:38.433 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.433 05:02:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.692 05:02:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.692 05:02:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.692 05:02:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.692 [2024-11-21 05:02:55.358199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:38.692 [2024-11-21 05:02:55.358300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.692 [2024-11-21 05:02:55.358342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:38.692 [2024-11-21 05:02:55.358370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.692 [2024-11-21 05:02:55.358587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.692 [2024-11-21 05:02:55.358637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.692 [2024-11-21 05:02:55.358725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:38.692 [2024-11-21 05:02:55.358762] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:38.692 [2024-11-21 05:02:55.358826] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:38.692 [2024-11-21 05:02:55.358902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.692 [2024-11-21 05:02:55.361900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:38.692 [2024-11-21 05:02:55.363752] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.692 spare 00:16:38.692 05:02:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.692 05:02:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:40.074 "name": "raid_bdev1", 00:16:40.074 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:40.074 "strip_size_kb": 0, 00:16:40.074 "state": "online", 00:16:40.074 "raid_level": "raid1", 00:16:40.074 "superblock": true, 00:16:40.074 "num_base_bdevs": 2, 00:16:40.074 "num_base_bdevs_discovered": 2, 00:16:40.074 "num_base_bdevs_operational": 2, 00:16:40.074 "process": { 00:16:40.074 "type": "rebuild", 00:16:40.074 "target": "spare", 00:16:40.074 "progress": { 00:16:40.074 "blocks": 2560, 00:16:40.074 "percent": 32 00:16:40.074 } 00:16:40.074 }, 00:16:40.074 "base_bdevs_list": [ 00:16:40.074 { 00:16:40.074 "name": "spare", 00:16:40.074 "uuid": "0ae8f8ea-29cd-59ab-9699-298297a33989", 00:16:40.074 "is_configured": true, 00:16:40.074 "data_offset": 256, 00:16:40.074 "data_size": 7936 00:16:40.074 }, 00:16:40.074 { 00:16:40.074 "name": "BaseBdev2", 00:16:40.074 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:40.074 "is_configured": true, 00:16:40.074 "data_offset": 256, 00:16:40.074 "data_size": 7936 00:16:40.074 } 00:16:40.074 ] 00:16:40.074 }' 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.074 [2024-11-21 
05:02:56.530479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.074 [2024-11-21 05:02:56.567780] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:40.074 [2024-11-21 05:02:56.567886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.074 [2024-11-21 05:02:56.567920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.074 [2024-11-21 05:02:56.567943] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.074 05:02:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.074 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.074 "name": "raid_bdev1", 00:16:40.074 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:40.074 "strip_size_kb": 0, 00:16:40.074 "state": "online", 00:16:40.074 "raid_level": "raid1", 00:16:40.074 "superblock": true, 00:16:40.074 "num_base_bdevs": 2, 00:16:40.074 "num_base_bdevs_discovered": 1, 00:16:40.074 "num_base_bdevs_operational": 1, 00:16:40.074 "base_bdevs_list": [ 00:16:40.074 { 00:16:40.074 "name": null, 00:16:40.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.075 "is_configured": false, 00:16:40.075 "data_offset": 0, 00:16:40.075 "data_size": 7936 00:16:40.075 }, 00:16:40.075 { 00:16:40.075 "name": "BaseBdev2", 00:16:40.075 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:40.075 "is_configured": true, 00:16:40.075 "data_offset": 256, 00:16:40.075 "data_size": 7936 00:16:40.075 } 00:16:40.075 ] 00:16:40.075 }' 00:16:40.075 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.075 05:02:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.335 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.335 05:02:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.335 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.335 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.335 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.335 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.335 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.335 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.335 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.335 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.595 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.595 "name": "raid_bdev1", 00:16:40.595 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:40.595 "strip_size_kb": 0, 00:16:40.595 "state": "online", 00:16:40.596 "raid_level": "raid1", 00:16:40.596 "superblock": true, 00:16:40.596 "num_base_bdevs": 2, 00:16:40.596 "num_base_bdevs_discovered": 1, 00:16:40.596 "num_base_bdevs_operational": 1, 00:16:40.596 "base_bdevs_list": [ 00:16:40.596 { 00:16:40.596 "name": null, 00:16:40.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.596 "is_configured": false, 00:16:40.596 "data_offset": 0, 00:16:40.596 "data_size": 7936 00:16:40.596 }, 00:16:40.596 { 00:16:40.596 "name": "BaseBdev2", 00:16:40.596 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:40.596 "is_configured": true, 00:16:40.596 "data_offset": 256, 
00:16:40.596 "data_size": 7936 00:16:40.596 } 00:16:40.596 ] 00:16:40.596 }' 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.596 [2024-11-21 05:02:57.154822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:40.596 [2024-11-21 05:02:57.154875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.596 [2024-11-21 05:02:57.154893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:40.596 [2024-11-21 05:02:57.154903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.596 [2024-11-21 05:02:57.155046] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.596 [2024-11-21 05:02:57.155063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:40.596 [2024-11-21 05:02:57.155117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:40.596 [2024-11-21 05:02:57.155132] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:40.596 [2024-11-21 05:02:57.155140] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:40.596 [2024-11-21 05:02:57.155152] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:40.596 BaseBdev1 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.596 05:02:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.534 05:02:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.534 "name": "raid_bdev1", 00:16:41.534 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:41.534 "strip_size_kb": 0, 00:16:41.534 "state": "online", 00:16:41.534 "raid_level": "raid1", 00:16:41.534 "superblock": true, 00:16:41.534 "num_base_bdevs": 2, 00:16:41.534 "num_base_bdevs_discovered": 1, 00:16:41.534 "num_base_bdevs_operational": 1, 00:16:41.534 "base_bdevs_list": [ 00:16:41.534 { 00:16:41.534 "name": null, 00:16:41.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.534 "is_configured": false, 00:16:41.534 "data_offset": 0, 00:16:41.534 "data_size": 7936 00:16:41.534 }, 00:16:41.534 { 00:16:41.534 "name": "BaseBdev2", 00:16:41.534 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:41.534 "is_configured": true, 00:16:41.534 "data_offset": 256, 00:16:41.534 "data_size": 7936 00:16:41.534 } 00:16:41.534 ] 00:16:41.534 }' 00:16:41.534 05:02:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.534 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.104 "name": "raid_bdev1", 00:16:42.104 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:42.104 "strip_size_kb": 0, 00:16:42.104 "state": "online", 00:16:42.104 "raid_level": "raid1", 00:16:42.104 "superblock": true, 00:16:42.104 "num_base_bdevs": 2, 00:16:42.104 "num_base_bdevs_discovered": 1, 00:16:42.104 "num_base_bdevs_operational": 1, 00:16:42.104 "base_bdevs_list": [ 00:16:42.104 { 00:16:42.104 "name": 
null, 00:16:42.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.104 "is_configured": false, 00:16:42.104 "data_offset": 0, 00:16:42.104 "data_size": 7936 00:16:42.104 }, 00:16:42.104 { 00:16:42.104 "name": "BaseBdev2", 00:16:42.104 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:42.104 "is_configured": true, 00:16:42.104 "data_offset": 256, 00:16:42.104 "data_size": 7936 00:16:42.104 } 00:16:42.104 ] 00:16:42.104 }' 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.104 [2024-11-21 05:02:58.748115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.104 [2024-11-21 05:02:58.748309] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:42.104 [2024-11-21 05:02:58.748371] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:42.104 request: 00:16:42.104 { 00:16:42.104 "base_bdev": "BaseBdev1", 00:16:42.104 "raid_bdev": "raid_bdev1", 00:16:42.104 "method": "bdev_raid_add_base_bdev", 00:16:42.104 "req_id": 1 00:16:42.104 } 00:16:42.104 Got JSON-RPC error response 00:16:42.104 response: 00:16:42.104 { 00:16:42.104 "code": -22, 00:16:42.104 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:42.104 } 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:42.104 05:02:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.102 "name": "raid_bdev1", 00:16:43.102 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:43.102 "strip_size_kb": 0, 
00:16:43.102 "state": "online", 00:16:43.102 "raid_level": "raid1", 00:16:43.102 "superblock": true, 00:16:43.102 "num_base_bdevs": 2, 00:16:43.102 "num_base_bdevs_discovered": 1, 00:16:43.102 "num_base_bdevs_operational": 1, 00:16:43.102 "base_bdevs_list": [ 00:16:43.102 { 00:16:43.102 "name": null, 00:16:43.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.102 "is_configured": false, 00:16:43.102 "data_offset": 0, 00:16:43.102 "data_size": 7936 00:16:43.102 }, 00:16:43.102 { 00:16:43.102 "name": "BaseBdev2", 00:16:43.102 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:43.102 "is_configured": true, 00:16:43.102 "data_offset": 256, 00:16:43.102 "data_size": 7936 00:16:43.102 } 00:16:43.102 ] 00:16:43.102 }' 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.102 05:02:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.675 
05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.675 "name": "raid_bdev1", 00:16:43.675 "uuid": "4c6862de-4a0d-4875-b759-933235449289", 00:16:43.675 "strip_size_kb": 0, 00:16:43.675 "state": "online", 00:16:43.675 "raid_level": "raid1", 00:16:43.675 "superblock": true, 00:16:43.675 "num_base_bdevs": 2, 00:16:43.675 "num_base_bdevs_discovered": 1, 00:16:43.675 "num_base_bdevs_operational": 1, 00:16:43.675 "base_bdevs_list": [ 00:16:43.675 { 00:16:43.675 "name": null, 00:16:43.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.675 "is_configured": false, 00:16:43.675 "data_offset": 0, 00:16:43.675 "data_size": 7936 00:16:43.675 }, 00:16:43.675 { 00:16:43.675 "name": "BaseBdev2", 00:16:43.675 "uuid": "93042f2d-a2da-5e6d-be16-9b7c69af1051", 00:16:43.675 "is_configured": true, 00:16:43.675 "data_offset": 256, 00:16:43.675 "data_size": 7936 00:16:43.675 } 00:16:43.675 ] 00:16:43.675 }' 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99496 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 99496 ']' 00:16:43.675 05:03:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 99496 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.675 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99496 00:16:43.936 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.936 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.936 killing process with pid 99496 00:16:43.936 Received shutdown signal, test time was about 60.000000 seconds 00:16:43.936 00:16:43.936 Latency(us) 00:16:43.936 [2024-11-21T05:03:00.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.936 [2024-11-21T05:03:00.671Z] =================================================================================================================== 00:16:43.936 [2024-11-21T05:03:00.671Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:43.936 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99496' 00:16:43.936 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 99496 00:16:43.936 [2024-11-21 05:03:00.438877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.936 [2024-11-21 05:03:00.438981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.936 [2024-11-21 05:03:00.439028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.936 [2024-11-21 05:03:00.439036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:43.936 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 99496 00:16:43.936 [2024-11-21 05:03:00.472745] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.196 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:44.196 00:16:44.196 real 0m16.263s 00:16:44.196 user 0m21.768s 00:16:44.196 sys 0m1.737s 00:16:44.196 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.196 ************************************ 00:16:44.196 END TEST raid_rebuild_test_sb_md_interleaved 00:16:44.196 ************************************ 00:16:44.197 05:03:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.197 05:03:00 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:44.197 05:03:00 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:44.197 05:03:00 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99496 ']' 00:16:44.197 05:03:00 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99496 00:16:44.197 05:03:00 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:44.197 00:16:44.197 real 10m2.457s 00:16:44.197 user 14m9.992s 00:16:44.197 sys 1m52.871s 00:16:44.197 ************************************ 00:16:44.197 END TEST bdev_raid 00:16:44.197 ************************************ 00:16:44.197 05:03:00 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.197 05:03:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.197 05:03:00 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:44.197 05:03:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:44.197 05:03:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.197 05:03:00 -- common/autotest_common.sh@10 -- # set +x 00:16:44.197 
************************************ 00:16:44.197 START TEST spdkcli_raid 00:16:44.197 ************************************ 00:16:44.197 05:03:00 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:44.457 * Looking for test storage... 00:16:44.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:44.457 05:03:00 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:44.457 05:03:00 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:44.457 05:03:00 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:44.457 05:03:01 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.457 05:03:01 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:44.457 05:03:01 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.457 05:03:01 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.457 --rc genhtml_branch_coverage=1 00:16:44.457 --rc genhtml_function_coverage=1 00:16:44.457 --rc genhtml_legend=1 00:16:44.457 --rc geninfo_all_blocks=1 00:16:44.457 --rc geninfo_unexecuted_blocks=1 00:16:44.457 00:16:44.457 ' 00:16:44.457 05:03:01 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.457 --rc genhtml_branch_coverage=1 00:16:44.457 --rc genhtml_function_coverage=1 00:16:44.457 --rc genhtml_legend=1 00:16:44.457 --rc geninfo_all_blocks=1 00:16:44.457 --rc geninfo_unexecuted_blocks=1 00:16:44.457 00:16:44.457 ' 00:16:44.457 
05:03:01 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.457 --rc genhtml_branch_coverage=1 00:16:44.457 --rc genhtml_function_coverage=1 00:16:44.457 --rc genhtml_legend=1 00:16:44.457 --rc geninfo_all_blocks=1 00:16:44.457 --rc geninfo_unexecuted_blocks=1 00:16:44.457 00:16:44.457 ' 00:16:44.457 05:03:01 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.458 --rc genhtml_branch_coverage=1 00:16:44.458 --rc genhtml_function_coverage=1 00:16:44.458 --rc genhtml_legend=1 00:16:44.458 --rc geninfo_all_blocks=1 00:16:44.458 --rc geninfo_unexecuted_blocks=1 00:16:44.458 00:16:44.458 ' 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:44.458 05:03:01 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:44.458 05:03:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:44.458 05:03:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100167 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:44.458 05:03:01 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100167 00:16:44.458 05:03:01 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 100167 ']' 00:16:44.458 05:03:01 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.458 05:03:01 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.458 05:03:01 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.458 05:03:01 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.458 05:03:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.718 [2024-11-21 05:03:01.223398] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:16:44.718 [2024-11-21 05:03:01.223652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100167 ] 00:16:44.718 [2024-11-21 05:03:01.402320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:44.718 [2024-11-21 05:03:01.430859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.718 [2024-11-21 05:03:01.430951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.657 05:03:02 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.657 05:03:02 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:16:45.657 05:03:02 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:45.657 05:03:02 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:45.657 05:03:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.657 05:03:02 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:45.657 05:03:02 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.657 05:03:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.657 05:03:02 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:45.657 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:45.657 ' 00:16:47.039 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:47.039 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:47.298 05:03:03 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:47.298 05:03:03 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.298 05:03:03 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:47.298 05:03:03 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:47.298 05:03:03 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.298 05:03:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.298 05:03:03 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:47.298 ' 00:16:48.236 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:48.495 05:03:04 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:48.495 05:03:04 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:48.495 05:03:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.495 05:03:05 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:48.495 05:03:05 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:48.495 05:03:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.495 05:03:05 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:48.495 05:03:05 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:49.064 05:03:05 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:49.064 05:03:05 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:49.064 05:03:05 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:49.064 05:03:05 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.064 05:03:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.064 05:03:05 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:49.064 05:03:05 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.064 05:03:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.064 05:03:05 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:49.064 ' 00:16:50.003 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:50.003 05:03:06 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:50.003 05:03:06 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:50.003 05:03:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.263 05:03:06 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:50.263 05:03:06 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.263 05:03:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.263 05:03:06 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:50.263 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:50.263 ' 00:16:51.646 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:51.647 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:51.647 05:03:08 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.647 05:03:08 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100167 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 100167 ']' 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 100167 00:16:51.647 05:03:08 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100167 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100167' 00:16:51.647 killing process with pid 100167 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 100167 00:16:51.647 05:03:08 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 100167 00:16:52.217 05:03:08 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:52.217 05:03:08 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100167 ']' 00:16:52.217 05:03:08 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100167 00:16:52.217 05:03:08 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 100167 ']' 00:16:52.217 05:03:08 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 100167 00:16:52.217 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (100167) - No such process 00:16:52.217 05:03:08 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 100167 is not found' 00:16:52.217 Process with pid 100167 is not found 00:16:52.217 05:03:08 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:52.217 05:03:08 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:52.217 05:03:08 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:52.217 05:03:08 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:52.217 00:16:52.217 real 0m7.843s 00:16:52.217 user 0m16.566s 
00:16:52.217 sys 0m1.154s 00:16:52.217 05:03:08 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.217 05:03:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.217 ************************************ 00:16:52.217 END TEST spdkcli_raid 00:16:52.217 ************************************ 00:16:52.217 05:03:08 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:52.217 05:03:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:52.217 05:03:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.217 05:03:08 -- common/autotest_common.sh@10 -- # set +x 00:16:52.217 ************************************ 00:16:52.217 START TEST blockdev_raid5f 00:16:52.217 ************************************ 00:16:52.217 05:03:08 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:52.217 * Looking for test storage... 00:16:52.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:52.217 05:03:08 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:52.217 05:03:08 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:16:52.217 05:03:08 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.478 05:03:08 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.478 05:03:08 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.478 05:03:08 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:52.478 05:03:08 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.478 05:03:08 blockdev_raid5f -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:16:52.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.478 --rc genhtml_branch_coverage=1 00:16:52.478 --rc genhtml_function_coverage=1 00:16:52.478 --rc genhtml_legend=1 00:16:52.478 --rc geninfo_all_blocks=1 00:16:52.478 --rc geninfo_unexecuted_blocks=1 00:16:52.478 00:16:52.478 ' 00:16:52.478 05:03:08 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.478 --rc genhtml_branch_coverage=1 00:16:52.478 --rc genhtml_function_coverage=1 00:16:52.478 --rc genhtml_legend=1 00:16:52.478 --rc geninfo_all_blocks=1 00:16:52.478 --rc geninfo_unexecuted_blocks=1 00:16:52.478 00:16:52.478 ' 00:16:52.478 05:03:08 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.478 --rc genhtml_branch_coverage=1 00:16:52.478 --rc genhtml_function_coverage=1 00:16:52.478 --rc genhtml_legend=1 00:16:52.478 --rc geninfo_all_blocks=1 00:16:52.478 --rc geninfo_unexecuted_blocks=1 00:16:52.478 00:16:52.478 ' 00:16:52.478 05:03:08 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.478 --rc genhtml_branch_coverage=1 00:16:52.478 --rc genhtml_function_coverage=1 00:16:52.478 --rc genhtml_legend=1 00:16:52.478 --rc geninfo_all_blocks=1 00:16:52.478 --rc geninfo_unexecuted_blocks=1 00:16:52.478 00:16:52.478 ' 00:16:52.478 05:03:08 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:52.478 05:03:08 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:52.478 05:03:08 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:52.478 05:03:08 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:52.478 05:03:08 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:52.478 05:03:08 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:52.478 05:03:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:52.478 05:03:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:52.478 05:03:08 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:52.478 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:52.478 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:52.478 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:52.478 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:52.478 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:52.478 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:52.478 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:52.478 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:52.478 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:52.479 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:52.479 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:52.479 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:52.479 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:52.479 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:52.479 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:52.479 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100425 00:16:52.479 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:52.479 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:52.479 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100425 00:16:52.479 05:03:09 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 100425 ']' 00:16:52.479 05:03:09 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.479 05:03:09 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.479 05:03:09 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.479 05:03:09 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.479 05:03:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:52.479 [2024-11-21 05:03:09.121916] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:16:52.479 [2024-11-21 05:03:09.122132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100425 ] 00:16:52.738 [2024-11-21 05:03:09.298405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.738 [2024-11-21 05:03:09.325867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.306 05:03:09 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.306 05:03:09 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:16:53.307 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:53.307 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:53.307 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:53.307 05:03:09 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.307 05:03:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:53.307 Malloc0 00:16:53.307 Malloc1 00:16:53.307 Malloc2 00:16:53.307 05:03:09 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.307 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:53.307 05:03:09 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.307 05:03:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:53.307 05:03:09 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.307 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:53.307 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:53.307 05:03:09 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.307 05:03:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:53.307 
05:03:09 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.307 05:03:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:53.307 05:03:09 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.307 05:03:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:53.307 05:03:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.307 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:53.307 05:03:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.307 05:03:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.567 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:53.567 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.567 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.567 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:53.567 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:53.567 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "efde35a2-3a37-4193-a9fd-4fbb774906fc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "efde35a2-3a37-4193-a9fd-4fbb774906fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "efde35a2-3a37-4193-a9fd-4fbb774906fc",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ae15117f-3fff-452b-bae0-2d75d0665cd0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "69a128d3-53c6-45b7-a820-52d844b45226",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3639bf96-d2cd-4076-abc9-1da9c21fab7e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:53.567 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:53.567 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:16:53.567 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:53.567 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100425 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 100425 ']' 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 100425 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100425 00:16:53.567 killing process with pid 100425 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100425' 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 100425 00:16:53.567 05:03:10 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 100425 00:16:54.137 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:54.137 05:03:10 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:54.137 05:03:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:54.137 05:03:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.137 05:03:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:54.137 ************************************ 00:16:54.137 START TEST bdev_hello_world 00:16:54.137 ************************************ 00:16:54.137 05:03:10 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:54.137 [2024-11-21 05:03:10.685608] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:16:54.137 [2024-11-21 05:03:10.685805] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100465 ] 00:16:54.137 [2024-11-21 05:03:10.853137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.397 [2024-11-21 05:03:10.881797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.397 [2024-11-21 05:03:11.058347] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:54.397 [2024-11-21 05:03:11.058495] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:54.397 [2024-11-21 05:03:11.058515] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:54.397 [2024-11-21 05:03:11.058861] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:54.397 [2024-11-21 05:03:11.058993] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:54.397 [2024-11-21 05:03:11.059010] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:54.397 [2024-11-21 05:03:11.059059] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:16:54.397 00:16:54.398 [2024-11-21 05:03:11.059076] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:54.658 00:16:54.658 real 0m0.697s 00:16:54.658 user 0m0.379s 00:16:54.658 sys 0m0.211s 00:16:54.658 ************************************ 00:16:54.658 END TEST bdev_hello_world 00:16:54.658 ************************************ 00:16:54.658 05:03:11 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.658 05:03:11 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:54.658 05:03:11 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:54.658 05:03:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:54.658 05:03:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.658 05:03:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:54.658 ************************************ 00:16:54.658 START TEST bdev_bounds 00:16:54.658 ************************************ 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100490 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100490' 00:16:54.658 Process bdevio pid: 100490 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100490 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 100490 ']' 00:16:54.658 05:03:11 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.658 05:03:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:54.918 [2024-11-21 05:03:11.471833] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:16:54.918 [2024-11-21 05:03:11.471972] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100490 ] 00:16:54.918 [2024-11-21 05:03:11.642141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:55.178 [2024-11-21 05:03:11.672778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.178 [2024-11-21 05:03:11.672882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.178 [2024-11-21 05:03:11.673008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.747 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.747 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:55.747 05:03:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:55.747 I/O targets: 00:16:55.747 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:55.747 
00:16:55.747 00:16:55.747 CUnit - A unit testing framework for C - Version 2.1-3 00:16:55.747 http://cunit.sourceforge.net/ 00:16:55.747 00:16:55.747 00:16:55.747 Suite: bdevio tests on: raid5f 00:16:55.747 Test: blockdev write read block ...passed 00:16:55.747 Test: blockdev write zeroes read block ...passed 00:16:55.747 Test: blockdev write zeroes read no split ...passed 00:16:55.747 Test: blockdev write zeroes read split ...passed 00:16:56.007 Test: blockdev write zeroes read split partial ...passed 00:16:56.007 Test: blockdev reset ...passed 00:16:56.007 Test: blockdev write read 8 blocks ...passed 00:16:56.007 Test: blockdev write read size > 128k ...passed 00:16:56.007 Test: blockdev write read invalid size ...passed 00:16:56.007 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:56.007 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:56.007 Test: blockdev write read max offset ...passed 00:16:56.007 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:56.007 Test: blockdev writev readv 8 blocks ...passed 00:16:56.007 Test: blockdev writev readv 30 x 1block ...passed 00:16:56.007 Test: blockdev writev readv block ...passed 00:16:56.007 Test: blockdev writev readv size > 128k ...passed 00:16:56.007 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:56.007 Test: blockdev comparev and writev ...passed 00:16:56.007 Test: blockdev nvme passthru rw ...passed 00:16:56.007 Test: blockdev nvme passthru vendor specific ...passed 00:16:56.007 Test: blockdev nvme admin passthru ...passed 00:16:56.007 Test: blockdev copy ...passed 00:16:56.007 00:16:56.007 Run Summary: Type Total Ran Passed Failed Inactive 00:16:56.007 suites 1 1 n/a 0 0 00:16:56.007 tests 23 23 23 0 0 00:16:56.007 asserts 130 130 130 0 n/a 00:16:56.007 00:16:56.007 Elapsed time = 0.339 seconds 00:16:56.007 0 00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100490 
00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 100490 ']' 00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 100490 00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100490 00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100490' 00:16:56.007 killing process with pid 100490 00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 100490 00:16:56.007 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 100490 00:16:56.268 05:03:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:56.268 00:16:56.268 real 0m1.468s 00:16:56.268 user 0m3.515s 00:16:56.268 sys 0m0.375s 00:16:56.268 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.268 05:03:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:56.268 ************************************ 00:16:56.268 END TEST bdev_bounds 00:16:56.268 ************************************ 00:16:56.268 05:03:12 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:56.268 05:03:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:56.268 05:03:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:16:56.268 05:03:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:56.268 ************************************ 00:16:56.268 START TEST bdev_nbd 00:16:56.268 ************************************ 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:56.268 05:03:12 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:56.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100539 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100539 /var/tmp/spdk-nbd.sock 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 100539 ']' 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.268 05:03:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:56.529 [2024-11-21 05:03:13.031764] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:16:56.529 [2024-11-21 05:03:13.031910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.529 [2024-11-21 05:03:13.210382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.529 [2024-11-21 05:03:13.235640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:57.469 05:03:13 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.469 1+0 records in 00:16:57.469 1+0 records out 00:16:57.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629369 s, 6.5 MB/s 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:57.469 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:57.470 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:57.730 { 00:16:57.730 "nbd_device": "/dev/nbd0", 00:16:57.730 "bdev_name": "raid5f" 00:16:57.730 } 00:16:57.730 ]' 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:57.730 { 00:16:57.730 "nbd_device": "/dev/nbd0", 00:16:57.730 "bdev_name": "raid5f" 00:16:57.730 } 00:16:57.730 ]' 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.730 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:57.990 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.250 05:03:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:16:58.510 /dev/nbd0 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:58.511 05:03:15 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.511 1+0 records in 00:16:58.511 1+0 records out 00:16:58.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588922 s, 7.0 MB/s 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:58.511 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:58.771 { 00:16:58.771 "nbd_device": "/dev/nbd0", 00:16:58.771 "bdev_name": "raid5f" 00:16:58.771 } 00:16:58.771 ]' 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:58.771 { 00:16:58.771 "nbd_device": "/dev/nbd0", 00:16:58.771 "bdev_name": "raid5f" 00:16:58.771 } 00:16:58.771 ]' 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:58.771 256+0 records in 00:16:58.771 256+0 records out 00:16:58.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119402 s, 87.8 MB/s 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:58.771 256+0 records in 00:16:58.771 256+0 records out 00:16:58.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290954 s, 36.0 MB/s 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:58.771 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:58.772 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:58.772 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.772 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:59.032 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:59.292 05:03:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:59.552 malloc_lvol_verify 00:16:59.552 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:59.552 b51b8bab-e793-475e-a793-44302496c7c5 00:16:59.812 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:59.812 2fef4df4-9a0a-401f-b9db-85c104a55e3c 00:16:59.812 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:00.072 /dev/nbd0 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:00.072 mke2fs 1.47.0 (5-Feb-2023) 00:17:00.072 Discarding device blocks: 0/4096 done 00:17:00.072 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:00.072 00:17:00.072 Allocating group tables: 0/1 done 00:17:00.072 Writing inode tables: 0/1 done 00:17:00.072 Creating journal (1024 blocks): done 00:17:00.072 Writing superblocks and filesystem accounting information: 0/1 done 00:17:00.072 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.072 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100539 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 100539 ']' 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 100539 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100539 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.332 killing process with pid 100539 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100539' 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 100539 00:17:00.332 05:03:16 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 100539 00:17:00.593 05:03:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:00.593 00:17:00.593 real 0m4.295s 00:17:00.593 user 0m6.183s 00:17:00.593 sys 0m1.294s 00:17:00.593 05:03:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.593 05:03:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:00.593 ************************************ 00:17:00.593 END TEST bdev_nbd 00:17:00.593 ************************************ 00:17:00.593 05:03:17 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:00.593 05:03:17 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:00.593 05:03:17 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:00.593 05:03:17 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:00.593 05:03:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:00.593 05:03:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.593 05:03:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.593 ************************************ 00:17:00.593 START TEST bdev_fio 00:17:00.593 ************************************ 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:00.593 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:00.593 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:00.854 ************************************ 00:17:00.854 START TEST bdev_fio_rw_verify 00:17:00.854 ************************************ 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:00.854 05:03:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:01.115 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:01.115 fio-3.35 00:17:01.115 Starting 1 thread 00:17:13.395 00:17:13.395 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100722: Thu Nov 21 05:03:28 2024 00:17:13.395 read: IOPS=12.6k, BW=49.3MiB/s (51.7MB/s)(493MiB/10001msec) 00:17:13.395 slat (nsec): min=16825, max=52312, avg=18546.50, stdev=1811.01 00:17:13.395 clat (usec): min=10, max=282, avg=126.92, stdev=43.98 00:17:13.395 lat (usec): min=29, max=302, avg=145.47, stdev=44.18 00:17:13.395 clat percentiles (usec): 00:17:13.395 | 50.000th=[ 130], 99.000th=[ 208], 99.900th=[ 225], 99.990th=[ 253], 00:17:13.395 | 99.999th=[ 277] 00:17:13.395 write: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(509MiB/9879msec); 0 zone resets 00:17:13.395 slat (usec): min=7, max=312, avg=16.18, stdev= 3.85 00:17:13.395 clat (usec): min=58, max=1729, avg=291.89, stdev=43.64 00:17:13.395 lat (usec): min=73, max=2030, avg=308.07, stdev=44.95 00:17:13.395 clat percentiles (usec): 00:17:13.395 | 50.000th=[ 297], 99.000th=[ 367], 99.900th=[ 660], 99.990th=[ 1418], 00:17:13.395 | 99.999th=[ 1663] 00:17:13.395 bw ( KiB/s): min=49048, max=55792, per=98.93%, avg=52239.58, stdev=1843.20, samples=19 00:17:13.395 iops : min=12262, max=13948, avg=13059.89, stdev=460.80, samples=19 00:17:13.395 lat (usec) : 20=0.01%, 50=0.01%, 
100=16.59%, 250=40.46%, 500=42.86% 00:17:13.395 lat (usec) : 750=0.04%, 1000=0.02% 00:17:13.395 lat (msec) : 2=0.02% 00:17:13.395 cpu : usr=98.81%, sys=0.56%, ctx=21, majf=0, minf=13380 00:17:13.395 IO depths : 1=7.6%, 2=19.9%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:13.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.395 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.395 issued rwts: total=126200,130414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.395 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:13.395 00:17:13.395 Run status group 0 (all jobs): 00:17:13.395 READ: bw=49.3MiB/s (51.7MB/s), 49.3MiB/s-49.3MiB/s (51.7MB/s-51.7MB/s), io=493MiB (517MB), run=10001-10001msec 00:17:13.395 WRITE: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=509MiB (534MB), run=9879-9879msec 00:17:13.395 ----------------------------------------------------- 00:17:13.395 Suppressions used: 00:17:13.395 count bytes template 00:17:13.395 1 7 /usr/src/fio/parse.c 00:17:13.395 294 28224 /usr/src/fio/iolog.c 00:17:13.395 1 8 libtcmalloc_minimal.so 00:17:13.395 1 904 libcrypto.so 00:17:13.395 ----------------------------------------------------- 00:17:13.395 00:17:13.395 00:17:13.395 real 0m11.252s 00:17:13.395 user 0m11.530s 00:17:13.395 sys 0m0.689s 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:13.395 ************************************ 00:17:13.395 END TEST bdev_fio_rw_verify 00:17:13.395 ************************************ 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:13.395 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:13.396 05:03:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "efde35a2-3a37-4193-a9fd-4fbb774906fc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "efde35a2-3a37-4193-a9fd-4fbb774906fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "efde35a2-3a37-4193-a9fd-4fbb774906fc",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ae15117f-3fff-452b-bae0-2d75d0665cd0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "69a128d3-53c6-45b7-a820-52d844b45226",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3639bf96-d2cd-4076-abc9-1da9c21fab7e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:13.396 05:03:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:13.396 05:03:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:13.396 05:03:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:13.396 /home/vagrant/spdk_repo/spdk 00:17:13.396 05:03:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:13.396 05:03:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:13.396 05:03:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:13.396 00:17:13.396 real 0m11.554s 00:17:13.396 user 0m11.649s 00:17:13.396 sys 0m0.835s 00:17:13.396 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.396 05:03:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:13.396 ************************************ 00:17:13.396 END TEST bdev_fio 00:17:13.396 ************************************ 00:17:13.396 05:03:28 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:13.396 05:03:28 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:13.396 05:03:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:13.396 05:03:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.396 05:03:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:13.396 ************************************ 00:17:13.396 START TEST bdev_verify 00:17:13.396 ************************************ 00:17:13.396 05:03:28 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:13.396 [2024-11-21 05:03:29.024953] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:17:13.396 [2024-11-21 05:03:29.025135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100869 ]
00:17:13.396 [2024-11-21 05:03:29.201436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:17:13.396 [2024-11-21 05:03:29.230045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:13.396 [2024-11-21 05:03:29.230185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:13.396 Running I/O for 5 seconds...
00:17:14.902 11070.00 IOPS, 43.24 MiB/s
[2024-11-21T05:03:32.575Z] 11237.50 IOPS, 43.90 MiB/s
[2024-11-21T05:03:33.513Z] 11258.33 IOPS, 43.98 MiB/s
[2024-11-21T05:03:34.459Z] 11295.00 IOPS, 44.12 MiB/s
00:17:17.724 Latency(us)
00:17:17.724 [2024-11-21T05:03:34.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:17.724 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:17.724 Verification LBA range: start 0x0 length 0x2000
00:17:17.724 raid5f : 5.01 4487.70 17.53 0.00 0.00 42832.47 264.72 32510.43
00:17:17.724 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:17.724 Verification LBA range: start 0x2000 length 0x2000
00:17:17.724 raid5f : 5.01 6770.50 26.45 0.00 0.00 28433.73 207.48 21063.10
00:17:17.724 [2024-11-21T05:03:34.459Z] ===================================================================================================================
00:17:17.724 [2024-11-21T05:03:34.459Z] Total : 11258.20 43.98 0.00 0.00 34176.59 207.48 32510.43
00:17:17.987
00:17:17.987 real 0m5.734s
00:17:17.987 user 0m10.658s
00:17:17.987 sys 0m0.247s
00:17:17.987 05:03:34 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:17.987 05:03:34 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- #
set +x 00:17:17.987 ************************************ 00:17:17.987 END TEST bdev_verify 00:17:17.987 ************************************ 00:17:18.247 05:03:34 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:18.247 05:03:34 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:18.247 05:03:34 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.247 05:03:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.247 ************************************ 00:17:18.247 START TEST bdev_verify_big_io 00:17:18.247 ************************************ 00:17:18.247 05:03:34 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:18.247 [2024-11-21 05:03:34.818769] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:17:18.247 [2024-11-21 05:03:34.818904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100957 ] 00:17:18.508 [2024-11-21 05:03:35.000749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:18.508 [2024-11-21 05:03:35.029948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.508 [2024-11-21 05:03:35.030021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.508 Running I/O for 5 seconds... 
00:17:20.828 695.00 IOPS, 43.44 MiB/s
[2024-11-21T05:03:38.503Z] 761.00 IOPS, 47.56 MiB/s
[2024-11-21T05:03:39.882Z] 803.67 IOPS, 50.23 MiB/s
[2024-11-21T05:03:40.452Z] 825.00 IOPS, 51.56 MiB/s
[2024-11-21T05:03:40.713Z] 838.00 IOPS, 52.38 MiB/s
00:17:23.978 Latency(us)
00:17:23.978 [2024-11-21T05:03:40.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:23.978 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:23.978 Verification LBA range: start 0x0 length 0x200
00:17:23.978 raid5f : 5.26 362.33 22.65 0.00 0.00 8738630.05 219.11 377304.20
00:17:23.978 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:23.978 Verification LBA range: start 0x200 length 0x200
00:17:23.978 raid5f : 5.14 469.30 29.33 0.00 0.00 6822190.32 213.74 300378.10
00:17:23.978 [2024-11-21T05:03:40.713Z] ===================================================================================================================
00:17:23.978 [2024-11-21T05:03:40.713Z] Total : 831.64 51.98 0.00 0.00 7667678.44 213.74 377304.20
00:17:23.978
00:17:23.978 real 0m5.970s
00:17:23.978 user 0m11.135s
00:17:23.978 sys 0m0.245s
00:17:23.978 05:03:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:23.978 05:03:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:17:23.978 ************************************
00:17:23.978 END TEST bdev_verify_big_io
00:17:23.978 ************************************
00:17:24.238 05:03:40 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:24.238 05:03:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:17:24.238 05:03:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:24.238 05:03:40
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:24.238 ************************************ 00:17:24.238 START TEST bdev_write_zeroes 00:17:24.238 ************************************ 00:17:24.238 05:03:40 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:24.238 [2024-11-21 05:03:40.868735] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:17:24.238 [2024-11-21 05:03:40.868885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101039 ] 00:17:24.498 [2024-11-21 05:03:41.044220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.498 [2024-11-21 05:03:41.072711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.758 Running I/O for 1 seconds... 
00:17:25.697 30159.00 IOPS, 117.81 MiB/s
00:17:25.697
00:17:25.697 Latency(us)
00:17:25.697 [2024-11-21T05:03:42.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:25.697 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:25.697 raid5f : 1.01 30125.38 117.68 0.00 0.00 4238.37 1323.60 6753.93
00:17:25.697 [2024-11-21T05:03:42.432Z] ===================================================================================================================
00:17:25.697 [2024-11-21T05:03:42.432Z] Total : 30125.38 117.68 0.00 0.00 4238.37 1323.60 6753.93
00:17:25.958
00:17:25.958 real 0m1.701s
00:17:25.958 user 0m1.379s
00:17:25.958 sys 0m0.210s
00:17:25.958 05:03:42 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:25.958 05:03:42 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:17:25.958 ************************************
00:17:25.958 END TEST bdev_write_zeroes
00:17:25.958 ************************************
00:17:25.958 05:03:42 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:25.958 05:03:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:17:25.958 05:03:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:25.958 05:03:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:17:25.958 ************************************
00:17:25.958 START TEST bdev_json_nonenclosed
00:17:25.958 ************************************
00:17:25.958 05:03:42 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:25.958 [2024-11-21
05:03:42.647974] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:17:25.958 [2024-11-21 05:03:42.648144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101077 ] 00:17:26.218 [2024-11-21 05:03:42.825487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.218 [2024-11-21 05:03:42.853499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.218 [2024-11-21 05:03:42.853602] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:26.218 [2024-11-21 05:03:42.853622] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:26.218 [2024-11-21 05:03:42.853635] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:26.218 00:17:26.218 real 0m0.387s 00:17:26.218 user 0m0.148s 00:17:26.218 sys 0m0.135s 00:17:26.218 05:03:42 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.218 05:03:42 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:26.218 ************************************ 00:17:26.218 END TEST bdev_json_nonenclosed 00:17:26.218 ************************************ 00:17:26.479 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:26.479 05:03:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:26.479 05:03:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.479 05:03:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.479 
************************************ 00:17:26.479 START TEST bdev_json_nonarray 00:17:26.479 ************************************ 00:17:26.479 05:03:43 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:26.479 [2024-11-21 05:03:43.104592] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:17:26.480 [2024-11-21 05:03:43.104712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101101 ] 00:17:26.740 [2024-11-21 05:03:43.275069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.740 [2024-11-21 05:03:43.304248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.740 [2024-11-21 05:03:43.304354] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:26.740 [2024-11-21 05:03:43.304376] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:26.740 [2024-11-21 05:03:43.304388] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:26.740 00:17:26.740 real 0m0.375s 00:17:26.740 user 0m0.149s 00:17:26.740 sys 0m0.123s 00:17:26.740 05:03:43 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.740 05:03:43 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:26.740 ************************************ 00:17:26.740 END TEST bdev_json_nonarray 00:17:26.740 ************************************ 00:17:26.740 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:26.740 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:26.740 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:26.741 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:26.741 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:26.741 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:26.741 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:26.741 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:26.741 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:26.741 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:26.741 05:03:43 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:26.741 00:17:26.741 real 0m34.708s 00:17:26.741 user 0m47.085s 00:17:26.741 sys 0m4.794s 00:17:26.741 05:03:43 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.741 05:03:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.741 
************************************ 00:17:26.741 END TEST blockdev_raid5f 00:17:26.741 ************************************ 00:17:27.001 05:03:43 -- spdk/autotest.sh@194 -- # uname -s 00:17:27.001 05:03:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:27.001 05:03:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:27.001 05:03:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:27.001 05:03:43 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:27.001 05:03:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:27.001 05:03:43 -- common/autotest_common.sh@10 -- # set +x 00:17:27.001 05:03:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:17:27.001 05:03:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:27.001 05:03:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:27.001 05:03:43 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:17:27.001 05:03:43 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:17:27.001 05:03:43 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT
00:17:27.001 05:03:43 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:17:27.001 05:03:43 -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:27.001 05:03:43 -- common/autotest_common.sh@10 -- # set +x
00:17:27.001 05:03:43 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:17:27.001 05:03:43 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:17:27.001 05:03:43 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:17:27.001 05:03:43 -- common/autotest_common.sh@10 -- # set +x
00:17:29.545 INFO: APP EXITING
00:17:29.545 INFO: killing all VMs
00:17:29.545 INFO: killing vhost app
00:17:29.545 INFO: EXIT DONE
00:17:29.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:30.065 Waiting for block devices as requested
00:17:30.065 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:17:30.065 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:17:31.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:31.005 Cleaning
00:17:31.005 Removing: /var/run/dpdk/spdk0/config
00:17:31.005 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:17:31.005 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:17:31.005 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:17:31.005 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:17:31.005 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:17:31.005 Removing: /var/run/dpdk/spdk0/hugepage_info
00:17:31.005 Removing: /dev/shm/spdk_tgt_trace.pid69216
00:17:31.005 Removing: /var/run/dpdk/spdk0
00:17:31.005 Removing: /var/run/dpdk/spdk_pid100167
00:17:31.265 Removing: /var/run/dpdk/spdk_pid100425
00:17:31.265 Removing: /var/run/dpdk/spdk_pid100465
00:17:31.265 Removing: /var/run/dpdk/spdk_pid100490
00:17:31.265 Removing: /var/run/dpdk/spdk_pid100707
00:17:31.265 Removing: /var/run/dpdk/spdk_pid100869
00:17:31.265 Removing: /var/run/dpdk/spdk_pid100957
00:17:31.265 Removing: /var/run/dpdk/spdk_pid101039
00:17:31.265 Removing: /var/run/dpdk/spdk_pid101077
00:17:31.265 Removing: /var/run/dpdk/spdk_pid101101
00:17:31.265 Removing: /var/run/dpdk/spdk_pid69052
00:17:31.265 Removing: /var/run/dpdk/spdk_pid69216
00:17:31.265 Removing: /var/run/dpdk/spdk_pid69423
00:17:31.265 Removing: /var/run/dpdk/spdk_pid69510
00:17:31.265 Removing: /var/run/dpdk/spdk_pid69539
00:17:31.265 Removing: /var/run/dpdk/spdk_pid69645
00:17:31.265 Removing: /var/run/dpdk/spdk_pid69663
00:17:31.265 Removing: /var/run/dpdk/spdk_pid69852
00:17:31.265 Removing: /var/run/dpdk/spdk_pid69931
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70016
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70105
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70191
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70225
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70267
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70332
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70449
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70874
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70924
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70976
00:17:31.265 Removing: /var/run/dpdk/spdk_pid70992
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71055
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71067
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71136
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71154
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71196
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71214
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71256
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71274
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71412
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71443
00:17:31.265 Removing: /var/run/dpdk/spdk_pid71527
00:17:31.265 Removing: /var/run/dpdk/spdk_pid72722
00:17:31.265 Removing: /var/run/dpdk/spdk_pid72917
00:17:31.266 Removing: /var/run/dpdk/spdk_pid73057
00:17:31.266 Removing: /var/run/dpdk/spdk_pid73666
00:17:31.266 Removing: /var/run/dpdk/spdk_pid73862
00:17:31.266 Removing: /var/run/dpdk/spdk_pid73997
00:17:31.266 Removing: /var/run/dpdk/spdk_pid74607
00:17:31.266 Removing: /var/run/dpdk/spdk_pid74926
00:17:31.266 Removing: /var/run/dpdk/spdk_pid75055
00:17:31.266 Removing: /var/run/dpdk/spdk_pid76396
00:17:31.266 Removing: /var/run/dpdk/spdk_pid76638
00:17:31.266 Removing: /var/run/dpdk/spdk_pid76767
00:17:31.266 Removing: /var/run/dpdk/spdk_pid78119
00:17:31.266 Removing: /var/run/dpdk/spdk_pid78350
00:17:31.266 Removing: /var/run/dpdk/spdk_pid78487
00:17:31.266 Removing: /var/run/dpdk/spdk_pid79822
00:17:31.266 Removing: /var/run/dpdk/spdk_pid80257
00:17:31.266 Removing: /var/run/dpdk/spdk_pid80386
00:17:31.266 Removing: /var/run/dpdk/spdk_pid81816
00:17:31.266 Removing: /var/run/dpdk/spdk_pid82064
00:17:31.526 Removing: /var/run/dpdk/spdk_pid82193
00:17:31.526 Removing: /var/run/dpdk/spdk_pid83623
00:17:31.526 Removing: /var/run/dpdk/spdk_pid83871
00:17:31.526 Removing: /var/run/dpdk/spdk_pid84000
00:17:31.526 Removing: /var/run/dpdk/spdk_pid85443
00:17:31.526 Removing: /var/run/dpdk/spdk_pid85914
00:17:31.526 Removing: /var/run/dpdk/spdk_pid86043
00:17:31.526 Removing: /var/run/dpdk/spdk_pid86177
00:17:31.526 Removing: /var/run/dpdk/spdk_pid86578
00:17:31.526 Removing: /var/run/dpdk/spdk_pid87294
00:17:31.526 Removing: /var/run/dpdk/spdk_pid87652
00:17:31.526 Removing: /var/run/dpdk/spdk_pid88324
00:17:31.526 Removing: /var/run/dpdk/spdk_pid88758
00:17:31.526 Removing: /var/run/dpdk/spdk_pid89489
00:17:31.526 Removing: /var/run/dpdk/spdk_pid89876
00:17:31.526 Removing: /var/run/dpdk/spdk_pid91791
00:17:31.526 Removing: /var/run/dpdk/spdk_pid92218
00:17:31.526 Removing: /var/run/dpdk/spdk_pid92637
00:17:31.526 Removing: /var/run/dpdk/spdk_pid94686
00:17:31.526 Removing: /var/run/dpdk/spdk_pid95160
00:17:31.526 Removing: /var/run/dpdk/spdk_pid95671
00:17:31.526 Removing: /var/run/dpdk/spdk_pid96723
00:17:31.526 Removing: /var/run/dpdk/spdk_pid97040
00:17:31.526 Removing: /var/run/dpdk/spdk_pid97956
00:17:31.526 Removing: /var/run/dpdk/spdk_pid98265
00:17:31.526 Removing: /var/run/dpdk/spdk_pid99182
00:17:31.526 Removing: /var/run/dpdk/spdk_pid99496
00:17:31.526 Clean
00:17:31.526 05:03:48 -- common/autotest_common.sh@1453 -- # return 0
00:17:31.526 05:03:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:17:31.526 05:03:48 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:31.526 05:03:48 -- common/autotest_common.sh@10 -- # set +x
00:17:31.526 05:03:48 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:17:31.526 05:03:48 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:31.526 05:03:48 -- common/autotest_common.sh@10 -- # set +x
00:17:31.788 05:03:48 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:17:31.788 05:03:48 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:17:31.788 05:03:48 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:17:31.788 05:03:48 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:17:31.788 05:03:48 -- spdk/autotest.sh@398 -- # hostname
00:17:31.788 05:03:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:17:31.788 geninfo: WARNING: invalid characters removed from testname!
00:17:58.402 05:04:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:58.402 05:04:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:59.783 05:04:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:02.325 05:04:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:04.234 05:04:20 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:06.143 05:04:22 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:08.055 05:04:24 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:18:08.055 05:04:24 -- spdk/autorun.sh@1 -- $ timing_finish 00:18:08.055 05:04:24 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:18:08.055 05:04:24 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:18:08.055 05:04:24 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:18:08.055 05:04:24 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:08.055 + [[ -n 6158 ]] 00:18:08.055 + sudo kill 6158 00:18:08.065 [Pipeline] } 00:18:08.081 [Pipeline] // timeout 00:18:08.086 [Pipeline] } 00:18:08.100 [Pipeline] // stage 00:18:08.105 [Pipeline] } 00:18:08.118 [Pipeline] // catchError 00:18:08.128 [Pipeline] stage 00:18:08.130 [Pipeline] { (Stop VM) 00:18:08.142 [Pipeline] sh 00:18:08.425 + vagrant halt 00:18:10.966 ==> default: Halting domain... 00:18:19.117 [Pipeline] sh 00:18:19.401 + vagrant destroy -f 00:18:21.944 ==> default: Removing domain... 
00:18:21.958 [Pipeline] sh 00:18:22.326 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:18:22.338 [Pipeline] } 00:18:22.355 [Pipeline] // stage 00:18:22.360 [Pipeline] } 00:18:22.377 [Pipeline] // dir 00:18:22.382 [Pipeline] } 00:18:22.399 [Pipeline] // wrap 00:18:22.405 [Pipeline] } 00:18:22.422 [Pipeline] // catchError 00:18:22.434 [Pipeline] stage 00:18:22.437 [Pipeline] { (Epilogue) 00:18:22.451 [Pipeline] sh 00:18:22.743 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:18:26.984 [Pipeline] catchError 00:18:26.987 [Pipeline] { 00:18:27.001 [Pipeline] sh 00:18:27.287 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:18:27.287 Artifacts sizes are good 00:18:27.297 [Pipeline] } 00:18:27.312 [Pipeline] // catchError 00:18:27.324 [Pipeline] archiveArtifacts 00:18:27.331 Archiving artifacts 00:18:27.454 [Pipeline] cleanWs 00:18:27.467 [WS-CLEANUP] Deleting project workspace... 00:18:27.467 [WS-CLEANUP] Deferred wipeout is used... 00:18:27.474 [WS-CLEANUP] done 00:18:27.476 [Pipeline] } 00:18:27.492 [Pipeline] // stage 00:18:27.497 [Pipeline] } 00:18:27.511 [Pipeline] // node 00:18:27.517 [Pipeline] End of Pipeline 00:18:27.559 Finished: SUCCESS